2025-09-03 00:00:09.259247 | Job console starting
2025-09-03 00:00:09.274166 | Updating git repos
2025-09-03 00:00:09.767724 | Cloning repos into workspace
2025-09-03 00:00:10.098387 | Restoring repo states
2025-09-03 00:00:10.131243 | Merging changes
2025-09-03 00:00:10.131265 | Checking out repos
2025-09-03 00:00:10.720075 | Preparing playbooks
2025-09-03 00:00:12.006136 | Running Ansible setup
2025-09-03 00:00:19.590470 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-09-03 00:00:21.661502 |
2025-09-03 00:00:21.661641 | PLAY [Base pre]
2025-09-03 00:00:21.674635 |
2025-09-03 00:00:21.674734 | TASK [Setup log path fact]
2025-09-03 00:00:21.702243 | orchestrator | ok
2025-09-03 00:00:21.716562 |
2025-09-03 00:00:21.716670 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-09-03 00:00:21.790130 | orchestrator | ok
2025-09-03 00:00:21.816596 |
2025-09-03 00:00:21.816765 | TASK [emit-job-header : Print job information]
2025-09-03 00:00:21.878690 | # Job Information
2025-09-03 00:00:21.878946 | Ansible Version: 2.16.14
2025-09-03 00:00:21.878989 | Job: testbed-deploy-in-a-nutshell-with-tempest-ubuntu-24.04
2025-09-03 00:00:21.879039 | Pipeline: periodic-midnight
2025-09-03 00:00:21.879068 | Executor: 521e9411259a
2025-09-03 00:00:21.879089 | Triggered by: https://github.com/osism/testbed
2025-09-03 00:00:21.879111 | Event ID: 651a6061cbd2411d9f956be0cd8ab119
2025-09-03 00:00:21.886063 |
2025-09-03 00:00:21.886174 | LOOP [emit-job-header : Print node information]
2025-09-03 00:00:22.186876 | orchestrator | ok:
2025-09-03 00:00:22.187019 | orchestrator | # Node Information
2025-09-03 00:00:22.187046 | orchestrator | Inventory Hostname: orchestrator
2025-09-03 00:00:22.187066 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-09-03 00:00:22.187085 | orchestrator | Username: zuul-testbed06
2025-09-03 00:00:22.187101 | orchestrator | Distro: Debian 12.11
2025-09-03 00:00:22.187120 | orchestrator | Provider: static-testbed
2025-09-03 00:00:22.187137 | orchestrator | Region:
2025-09-03 00:00:22.187154 | orchestrator | Label: testbed-orchestrator
2025-09-03 00:00:22.187170 | orchestrator | Product Name: OpenStack Nova
2025-09-03 00:00:22.187186 | orchestrator | Interface IP: 81.163.193.140
2025-09-03 00:00:22.201690 |
2025-09-03 00:00:22.201812 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-09-03 00:00:23.245875 | orchestrator -> localhost | changed
2025-09-03 00:00:23.252258 |
2025-09-03 00:00:23.252355 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-09-03 00:00:25.222317 | orchestrator -> localhost | changed
2025-09-03 00:00:25.233373 |
2025-09-03 00:00:25.233472 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-09-03 00:00:25.852830 | orchestrator -> localhost | ok
2025-09-03 00:00:25.858355 |
2025-09-03 00:00:25.858440 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-09-03 00:00:25.895954 | orchestrator | ok
2025-09-03 00:00:25.925587 | orchestrator | included: /var/lib/zuul/builds/9b30fd534a0f43c8b8a0305e86d4e4b7/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-09-03 00:00:25.961340 |
2025-09-03 00:00:25.961447 | TASK [add-build-sshkey : Create Temp SSH key]
2025-09-03 00:00:28.092436 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-09-03 00:00:28.092615 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/9b30fd534a0f43c8b8a0305e86d4e4b7/work/9b30fd534a0f43c8b8a0305e86d4e4b7_id_rsa
2025-09-03 00:00:28.092649 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/9b30fd534a0f43c8b8a0305e86d4e4b7/work/9b30fd534a0f43c8b8a0305e86d4e4b7_id_rsa.pub
2025-09-03 00:00:28.092671 | orchestrator -> localhost | The key fingerprint is:
2025-09-03 00:00:28.092694 | orchestrator -> localhost | SHA256:ooy9DTpB4Xu282efuvGp89pfduMXD8jKzAfRoeVCk8E zuul-build-sshkey
2025-09-03 00:00:28.092713 | orchestrator -> localhost | The key's randomart image is:
2025-09-03 00:00:28.092740 | orchestrator -> localhost | +---[RSA 3072]----+
2025-09-03 00:00:28.092759 | orchestrator -> localhost | | ..o |
2025-09-03 00:00:28.092777 | orchestrator -> localhost | | . E o |
2025-09-03 00:00:28.092795 | orchestrator -> localhost | | . . . * . |
2025-09-03 00:00:28.092811 | orchestrator -> localhost | | o + o |
2025-09-03 00:00:28.092828 | orchestrator -> localhost | | . . . S + . |
2025-09-03 00:00:28.092848 | orchestrator -> localhost | | o+o. . . o .. |
2025-09-03 00:00:28.092865 | orchestrator -> localhost | | .+=. .+ o oo+|
2025-09-03 00:00:28.092881 | orchestrator -> localhost | | ..o+ ++=o.o..+|
2025-09-03 00:00:28.092898 | orchestrator -> localhost | | ...ooo=OBo. ..|
2025-09-03 00:00:28.092915 | orchestrator -> localhost | +----[SHA256]-----+
2025-09-03 00:00:28.092955 | orchestrator -> localhost | ok: Runtime: 0:00:00.699417
2025-09-03 00:00:28.099161 |
2025-09-03 00:00:28.099242 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-09-03 00:00:28.141158 | orchestrator | ok
2025-09-03 00:00:28.159724 | orchestrator | included: /var/lib/zuul/builds/9b30fd534a0f43c8b8a0305e86d4e4b7/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-09-03 00:00:28.173387 |
2025-09-03 00:00:28.173484 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-09-03 00:00:28.196793 | orchestrator | skipping: Conditional result was False
2025-09-03 00:00:28.219162 |
2025-09-03 00:00:28.219268 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-09-03 00:00:28.841871 | orchestrator | changed
2025-09-03 00:00:28.851800 |
2025-09-03 00:00:28.851897 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-09-03 00:00:29.163815 | orchestrator | ok
2025-09-03 00:00:29.175100 |
2025-09-03 00:00:29.175198 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-09-03 00:00:29.646071 | orchestrator | ok
2025-09-03 00:00:29.657780 |
2025-09-03 00:00:29.657878 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-09-03 00:00:30.085688 | orchestrator | ok
2025-09-03 00:00:30.098278 |
2025-09-03 00:00:30.098375 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-09-03 00:00:30.121328 | orchestrator | skipping: Conditional result was False
2025-09-03 00:00:30.126900 |
2025-09-03 00:00:30.126988 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-09-03 00:00:31.002569 | orchestrator -> localhost | changed
2025-09-03 00:00:31.014123 |
2025-09-03 00:00:31.014221 | TASK [add-build-sshkey : Add back temp key]
2025-09-03 00:00:31.736235 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/9b30fd534a0f43c8b8a0305e86d4e4b7/work/9b30fd534a0f43c8b8a0305e86d4e4b7_id_rsa (zuul-build-sshkey)
2025-09-03 00:00:31.736416 |
orchestrator -> localhost | ok: Runtime: 0:00:00.009247 2025-09-03 00:00:31.743390 | 2025-09-03 00:00:31.743472 | TASK [add-build-sshkey : Verify we can still SSH to all nodes] 2025-09-03 00:00:32.319665 | orchestrator | ok 2025-09-03 00:00:32.324614 | 2025-09-03 00:00:32.324698 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)] 2025-09-03 00:00:32.351277 | orchestrator | skipping: Conditional result was False 2025-09-03 00:00:32.443887 | 2025-09-03 00:00:32.443985 | TASK [start-zuul-console : Start zuul_console daemon.] 2025-09-03 00:00:32.952380 | orchestrator | ok 2025-09-03 00:00:32.972552 | 2025-09-03 00:00:32.972655 | TASK [validate-host : Define zuul_info_dir fact] 2025-09-03 00:00:33.027158 | orchestrator | ok 2025-09-03 00:00:33.039409 | 2025-09-03 00:00:33.039511 | TASK [validate-host : Ensure Zuul Ansible directory exists] 2025-09-03 00:00:33.500481 | orchestrator -> localhost | ok 2025-09-03 00:00:33.506403 | 2025-09-03 00:00:33.506489 | TASK [validate-host : Collect information about the host] 2025-09-03 00:00:34.903554 | orchestrator | ok 2025-09-03 00:00:34.949106 | 2025-09-03 00:00:34.949236 | TASK [validate-host : Sanitize hostname] 2025-09-03 00:00:35.042397 | orchestrator | ok 2025-09-03 00:00:35.055417 | 2025-09-03 00:00:35.055549 | TASK [validate-host : Write out all ansible variables/facts known for each host] 2025-09-03 00:00:36.074480 | orchestrator -> localhost | changed 2025-09-03 00:00:36.080481 | 2025-09-03 00:00:36.080585 | TASK [validate-host : Collect information about zuul worker] 2025-09-03 00:00:36.665596 | orchestrator | ok 2025-09-03 00:00:36.670622 | 2025-09-03 00:00:36.670713 | TASK [validate-host : Write out all zuul information for each host] 2025-09-03 00:00:38.266146 | orchestrator -> localhost | changed 2025-09-03 00:00:38.277650 | 2025-09-03 00:00:38.277751 | TASK [prepare-workspace-log : Start zuul_console daemon.] 2025-09-03 00:00:38.591841 | orchestrator | ok 2025-09-03 00:00:38.597676 | 2025-09-03 00:00:38.597768 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.] 2025-09-03 00:01:19.978138 | orchestrator | changed: 2025-09-03 00:01:19.978355 | orchestrator | .d..t...... src/ 2025-09-03 00:01:19.978391 | orchestrator | .d..t...... src/github.com/ 2025-09-03 00:01:19.978415 | orchestrator | .d..t...... src/github.com/osism/ 2025-09-03 00:01:19.978438 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/ 2025-09-03 00:01:19.978459 | orchestrator | RedHat.yml 2025-09-03 00:01:20.001848 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml 2025-09-03 00:01:20.001866 | orchestrator | RedHat.yml 2025-09-03 00:01:20.001920 | orchestrator | = 1.53.0"... 2025-09-03 00:01:30.875324 | orchestrator | 00:01:30.875 STDOUT terraform: - Finding hashicorp/local versions matching ">= 2.2.0"... 2025-09-03 00:01:31.219116 | orchestrator | 00:01:31.218 STDOUT terraform: - Installing hashicorp/null v3.2.4... 2025-09-03 00:01:31.765290 | orchestrator | 00:01:31.765 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80) 2025-09-03 00:01:32.447514 | orchestrator | 00:01:32.447 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.3.2... 
2025-09-03 00:01:33.342784 | orchestrator | 00:01:33.342 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.3.2 (signed, key ID 4F80527A391BEFD2) 2025-09-03 00:01:33.413240 | orchestrator | 00:01:33.413 STDOUT terraform: - Installing hashicorp/local v2.5.3... 2025-09-03 00:01:33.899604 | orchestrator | 00:01:33.899 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80) 2025-09-03 00:01:33.899683 | orchestrator | 00:01:33.899 STDOUT terraform: Providers are signed by their developers. 2025-09-03 00:01:33.899876 | orchestrator | 00:01:33.899 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here: 2025-09-03 00:01:33.899933 | orchestrator | 00:01:33.899 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/ 2025-09-03 00:01:33.901483 | orchestrator | 00:01:33.899 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider 2025-09-03 00:01:33.901597 | orchestrator | 00:01:33.900 STDOUT terraform: selections it made above. Include this file in your version control repository 2025-09-03 00:01:33.901619 | orchestrator | 00:01:33.900 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when 2025-09-03 00:01:33.901632 | orchestrator | 00:01:33.900 STDOUT terraform: you run "tofu init" in the future. 2025-09-03 00:01:33.901645 | orchestrator | 00:01:33.900 STDOUT terraform: OpenTofu has been successfully initialized! 2025-09-03 00:01:33.901658 | orchestrator | 00:01:33.900 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see 2025-09-03 00:01:33.901670 | orchestrator | 00:01:33.900 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands 2025-09-03 00:01:33.901681 | orchestrator | 00:01:33.900 STDOUT terraform: should now work. 2025-09-03 00:01:33.901693 | orchestrator | 00:01:33.900 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu, 2025-09-03 00:01:33.901705 | orchestrator | 00:01:33.900 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other 2025-09-03 00:01:33.901717 | orchestrator | 00:01:33.901 STDOUT terraform: commands will detect it and remind you to do so if necessary. 2025-09-03 00:01:34.045293 | orchestrator | 00:01:34.045 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed06/terraform` instead. 2025-09-03 00:01:34.045380 | orchestrator | 00:01:34.045 WARN  The `workspace` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- workspace` instead. 2025-09-03 00:01:34.270752 | orchestrator | 00:01:34.270 STDOUT terraform: Created and switched to workspace "ci"! 2025-09-03 00:01:34.270871 | orchestrator | 00:01:34.270 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state, 2025-09-03 00:01:34.270886 | orchestrator | 00:01:34.270 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state 2025-09-03 00:01:34.270892 | orchestrator | 00:01:34.270 STDOUT terraform: for this configuration. 2025-09-03 00:01:34.433199 | orchestrator | 00:01:34.432 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed06/terraform` instead. 
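For orientation, a minimal sketch of a terraform { required_providers } block that would yield the provider selection shown in the init output above. This is not the testbed's actual configuration; the attribution of the partially visible ">= 1.53.0" constraint to the openstack provider is an assumption, and the null constraint is not visible in this excerpt.

  terraform {
    required_providers {
      openstack = {
        source  = "terraform-provider-openstack/openstack"
        version = ">= 1.53.0" # assumption: the ">= 1.53.0" constraint above is not attributed to a provider in this excerpt
      }
      local = {
        source  = "hashicorp/local"
        version = ">= 2.2.0" # matches "Finding hashicorp/local versions matching '>= 2.2.0'" above
      }
      null = {
        source = "hashicorp/null" # constraint not shown in this excerpt; v3.2.4 was selected
      }
    }
  }

Running "tofu init" against such a block produces the lock file .terraform.lock.hcl mentioned above, pinning the selected versions (null v3.2.4, openstack v3.3.2, local v2.5.3).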
2025-09-03 00:01:34.433289 | orchestrator | 00:01:34.433 WARN  The `fmt` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- fmt` instead. 2025-09-03 00:01:34.567194 | orchestrator | 00:01:34.566 STDOUT terraform: ci.auto.tfvars 2025-09-03 00:01:34.574093 | orchestrator | 00:01:34.573 STDOUT terraform: default_custom.tf 2025-09-03 00:01:34.757137 | orchestrator | 00:01:34.755 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed06/terraform` instead. 2025-09-03 00:01:36.036112 | orchestrator | 00:01:36.034 STDOUT terraform: data.openstack_networking_network_v2.public: Reading... 2025-09-03 00:01:36.598300 | orchestrator | 00:01:36.598 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a] 2025-09-03 00:01:36.876636 | orchestrator | 00:01:36.876 STDOUT terraform: OpenTofu used the selected providers to generate the following execution 2025-09-03 00:01:36.876711 | orchestrator | 00:01:36.876 STDOUT terraform: plan. Resource actions are indicated with the following symbols: 2025-09-03 00:01:36.876718 | orchestrator | 00:01:36.876 STDOUT terraform:  + create 2025-09-03 00:01:36.876724 | orchestrator | 00:01:36.876 STDOUT terraform:  <= read (data resources) 2025-09-03 00:01:36.876747 | orchestrator | 00:01:36.876 STDOUT terraform: OpenTofu will perform the following actions: 2025-09-03 00:01:36.876840 | orchestrator | 00:01:36.876 STDOUT terraform:  # data.openstack_images_image_v2.image will be read during apply 2025-09-03 00:01:36.876897 | orchestrator | 00:01:36.876 STDOUT terraform:  # (config refers to values not yet known) 2025-09-03 00:01:36.876948 | orchestrator | 00:01:36.876 STDOUT terraform:  <= data "openstack_images_image_v2" "image" { 2025-09-03 00:01:36.876994 | orchestrator | 00:01:36.876 STDOUT terraform:  + checksum = (known after apply) 2025-09-03 00:01:36.877043 | orchestrator | 00:01:36.876 STDOUT terraform:  + created_at = (known after apply) 2025-09-03 00:01:36.877092 | orchestrator | 00:01:36.877 STDOUT terraform:  + file = (known after apply) 2025-09-03 00:01:36.877139 | orchestrator | 00:01:36.877 STDOUT terraform:  + id = (known after apply) 2025-09-03 00:01:36.877186 | orchestrator | 00:01:36.877 STDOUT terraform:  + metadata = (known after apply) 2025-09-03 00:01:36.877230 | orchestrator | 00:01:36.877 STDOUT terraform:  + min_disk_gb = (known after apply) 2025-09-03 00:01:36.877279 | orchestrator | 00:01:36.877 STDOUT terraform:  + min_ram_mb = (known after apply) 2025-09-03 00:01:36.877316 | orchestrator | 00:01:36.877 STDOUT terraform:  + most_recent = true 2025-09-03 00:01:36.877366 | orchestrator | 00:01:36.877 STDOUT terraform:  + name = (known after apply) 2025-09-03 00:01:36.877411 | orchestrator | 00:01:36.877 STDOUT terraform:  + protected = (known after apply) 2025-09-03 00:01:36.877501 | orchestrator | 00:01:36.877 STDOUT terraform:  + region = (known after apply) 2025-09-03 00:01:36.877543 | orchestrator | 00:01:36.877 STDOUT terraform:  + schema = (known after apply) 2025-09-03 00:01:36.877679 | orchestrator | 00:01:36.877 STDOUT terraform:  + size_bytes = (known after apply) 2025-09-03 00:01:36.877708 | orchestrator | 00:01:36.877 STDOUT terraform:  + tags = (known after apply) 2025-09-03 00:01:36.877763 | orchestrator | 00:01:36.877 STDOUT terraform:  + updated_at = (known after apply) 2025-09-03 00:01:36.877803 | orchestrator | 
00:01:36.877 STDOUT terraform:  } 2025-09-03 00:01:36.877811 | orchestrator | 00:01:36.877 STDOUT terraform:  # data.openstack_images_image_v2.image_node will be read during apply 2025-09-03 00:01:36.877818 | orchestrator | 00:01:36.877 STDOUT terraform:  # (config refers to values not yet known) 2025-09-03 00:01:36.877822 | orchestrator | 00:01:36.877 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" { 2025-09-03 00:01:36.877858 | orchestrator | 00:01:36.877 STDOUT terraform:  + checksum = (known after apply) 2025-09-03 00:01:36.877896 | orchestrator | 00:01:36.877 STDOUT terraform:  + created_at = (known after apply) 2025-09-03 00:01:36.877998 | orchestrator | 00:01:36.877 STDOUT terraform:  + file = (known after apply) 2025-09-03 00:01:36.878029 | orchestrator | 00:01:36.877 STDOUT terraform:  + id = (known after apply) 2025-09-03 00:01:36.878035 | orchestrator | 00:01:36.877 STDOUT terraform:  + metadata = (known after apply) 2025-09-03 00:01:36.878041 | orchestrator | 00:01:36.877 STDOUT terraform:  + min_disk_gb = (known after apply) 2025-09-03 00:01:36.878156 | orchestrator | 00:01:36.878 STDOUT terraform:  + min_ram_mb = (known after apply) 2025-09-03 00:01:36.878177 | orchestrator | 00:01:36.878 STDOUT terraform:  + most_recent = true 2025-09-03 00:01:36.878181 | orchestrator | 00:01:36.878 STDOUT terraform:  + name = (known after apply) 2025-09-03 00:01:36.878187 | orchestrator | 00:01:36.878 STDOUT terraform:  + protected = (known after apply) 2025-09-03 00:01:36.878284 | orchestrator | 00:01:36.878 STDOUT terraform:  + region = (known after apply) 2025-09-03 00:01:36.878359 | orchestrator | 00:01:36.878 STDOUT terraform:  + schema = (known after apply) 2025-09-03 00:01:36.878365 | orchestrator | 00:01:36.878 STDOUT terraform:  + size_bytes = (known after apply) 2025-09-03 00:01:36.878370 | orchestrator | 00:01:36.878 STDOUT terraform:  + tags = (known after apply) 2025-09-03 00:01:36.878376 | orchestrator | 00:01:36.878 STDOUT terraform:  + updated_at = (known after apply) 2025-09-03 00:01:36.878406 | orchestrator | 00:01:36.878 STDOUT terraform:  } 2025-09-03 00:01:36.878496 | orchestrator | 00:01:36.878 STDOUT terraform:  # local_file.MANAGER_ADDRESS will be created 2025-09-03 00:01:36.878509 | orchestrator | 00:01:36.878 STDOUT terraform:  + resource "local_file" "MANAGER_ADDRESS" { 2025-09-03 00:01:36.878555 | orchestrator | 00:01:36.878 STDOUT terraform:  + content = (known after apply) 2025-09-03 00:01:36.878592 | orchestrator | 00:01:36.878 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-09-03 00:01:36.878643 | orchestrator | 00:01:36.878 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-09-03 00:01:36.878751 | orchestrator | 00:01:36.878 STDOUT terraform:  + content_md5 = (known after apply) 2025-09-03 00:01:36.878764 | orchestrator | 00:01:36.878 STDOUT terraform:  + content_sha1 = (known after apply) 2025-09-03 00:01:36.878769 | orchestrator | 00:01:36.878 STDOUT terraform:  + content_sha256 = (known after apply) 2025-09-03 00:01:36.878828 | orchestrator | 00:01:36.878 STDOUT terraform:  + content_sha512 = (known after apply) 2025-09-03 00:01:36.878837 | orchestrator | 00:01:36.878 STDOUT terraform:  + directory_permission = "0777" 2025-09-03 00:01:36.878937 | orchestrator | 00:01:36.878 STDOUT terraform:  + file_permission = "0644" 2025-09-03 00:01:36.878950 | orchestrator | 00:01:36.878 STDOUT terraform:  + filename = ".MANAGER_ADDRESS.ci" 2025-09-03 00:01:36.878956 | orchestrator | 00:01:36.878 STDOUT 
terraform:  + id = (known after apply) 2025-09-03 00:01:36.879028 | orchestrator | 00:01:36.878 STDOUT terraform:  } 2025-09-03 00:01:36.879040 | orchestrator | 00:01:36.878 STDOUT terraform:  # local_file.id_rsa_pub will be created 2025-09-03 00:01:36.879088 | orchestrator | 00:01:36.879 STDOUT terraform:  + resource "local_file" "id_rsa_pub" { 2025-09-03 00:01:36.879236 | orchestrator | 00:01:36.879 STDOUT terraform:  + content = (known after apply) 2025-09-03 00:01:36.879242 | orchestrator | 00:01:36.879 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-09-03 00:01:36.879246 | orchestrator | 00:01:36.879 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-09-03 00:01:36.879251 | orchestrator | 00:01:36.879 STDOUT terraform:  + content_md5 = (known after apply) 2025-09-03 00:01:36.879313 | orchestrator | 00:01:36.879 STDOUT terraform:  + content_sha1 = (known after apply) 2025-09-03 00:01:36.879345 | orchestrator | 00:01:36.879 STDOUT terraform:  + content_sha256 = (known after apply) 2025-09-03 00:01:36.879412 | orchestrator | 00:01:36.879 STDOUT terraform:  + content_sha512 = (known after apply) 2025-09-03 00:01:36.879437 | orchestrator | 00:01:36.879 STDOUT terraform:  + directory_permission = "0777" 2025-09-03 00:01:36.879491 | orchestrator | 00:01:36.879 STDOUT terraform:  + file_permission = "0644" 2025-09-03 00:01:36.879522 | orchestrator | 00:01:36.879 STDOUT terraform:  + filename = ".id_rsa.ci.pub" 2025-09-03 00:01:36.879626 | orchestrator | 00:01:36.879 STDOUT terraform:  + id = (known after apply) 2025-09-03 00:01:36.879631 | orchestrator | 00:01:36.879 STDOUT terraform:  } 2025-09-03 00:01:36.879640 | orchestrator | 00:01:36.879 STDOUT terraform:  # local_file.inventory will be created 2025-09-03 00:01:36.879646 | orchestrator | 00:01:36.879 STDOUT terraform:  + resource "local_file" "inventory" { 2025-09-03 00:01:36.879700 | orchestrator | 00:01:36.879 STDOUT terraform:  + content = (known after apply) 2025-09-03 00:01:36.879724 | orchestrator | 00:01:36.879 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-09-03 00:01:36.879779 | orchestrator | 00:01:36.879 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-09-03 00:01:36.879830 | orchestrator | 00:01:36.879 STDOUT terraform:  + content_md5 = (known after apply) 2025-09-03 00:01:36.879909 | orchestrator | 00:01:36.879 STDOUT terraform:  + content_sha1 = (known after apply) 2025-09-03 00:01:36.879915 | orchestrator | 00:01:36.879 STDOUT terraform:  + content_sha256 = (known after apply) 2025-09-03 00:01:36.879969 | orchestrator | 00:01:36.879 STDOUT terraform:  + content_sha512 = (known after apply) 2025-09-03 00:01:36.879982 | orchestrator | 00:01:36.879 STDOUT terraform:  + directory_permission = "0777" 2025-09-03 00:01:36.880037 | orchestrator | 00:01:36.879 STDOUT terraform:  + file_permission = "0644" 2025-09-03 00:01:36.880049 | orchestrator | 00:01:36.880 STDOUT terraform:  + filename = "inventory.ci" 2025-09-03 00:01:36.880148 | orchestrator | 00:01:36.880 STDOUT terraform:  + id = (known after apply) 2025-09-03 00:01:36.880159 | orchestrator | 00:01:36.880 STDOUT terraform:  } 2025-09-03 00:01:36.880163 | orchestrator | 00:01:36.880 STDOUT terraform:  # local_sensitive_file.id_rsa will be created 2025-09-03 00:01:36.880228 | orchestrator | 00:01:36.880 STDOUT terraform:  + resource "local_sensitive_file" "id_rsa" { 2025-09-03 00:01:36.880235 | orchestrator | 00:01:36.880 STDOUT terraform:  + content = (sensitive value) 2025-09-03 
00:01:36.880346 | orchestrator | 00:01:36.880 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-09-03 00:01:36.880351 | orchestrator | 00:01:36.880 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-09-03 00:01:36.880357 | orchestrator | 00:01:36.880 STDOUT terraform:  + content_md5 = (known after apply) 2025-09-03 00:01:36.880480 | orchestrator | 00:01:36.880 STDOUT terraform:  + content_sha1 = (known after apply) 2025-09-03 00:01:36.880493 | orchestrator | 00:01:36.880 STDOUT terraform:  + content_sha256 = (known after apply) 2025-09-03 00:01:36.880499 | orchestrator | 00:01:36.880 STDOUT terraform:  + content_sha512 = (known after apply) 2025-09-03 00:01:36.880555 | orchestrator | 00:01:36.880 STDOUT terraform:  + directory_permission = "0700" 2025-09-03 00:01:36.880562 | orchestrator | 00:01:36.880 STDOUT terraform:  + file_permission = "0600" 2025-09-03 00:01:36.880628 | orchestrator | 00:01:36.880 STDOUT terraform:  + filename = ".id_rsa.ci" 2025-09-03 00:01:36.880640 | orchestrator | 00:01:36.880 STDOUT terraform:  + id = (known after apply) 2025-09-03 00:01:36.880666 | orchestrator | 00:01:36.880 STDOUT terraform:  } 2025-09-03 00:01:36.880710 | orchestrator | 00:01:36.880 STDOUT terraform:  # null_resource.node_semaphore will be created 2025-09-03 00:01:36.880816 | orchestrator | 00:01:36.880 STDOUT terraform:  + resource "null_resource" "node_semaphore" { 2025-09-03 00:01:36.880826 | orchestrator | 00:01:36.880 STDOUT terraform:  + id = (known after apply) 2025-09-03 00:01:36.880832 | orchestrator | 00:01:36.880 STDOUT terraform:  } 2025-09-03 00:01:36.880838 | orchestrator | 00:01:36.880 STDOUT terraform:  # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created 2025-09-03 00:01:36.880946 | orchestrator | 00:01:36.880 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "manager_base_volume" { 2025-09-03 00:01:36.880953 | orchestrator | 00:01:36.880 STDOUT terraform:  + attachment = (known after apply) 2025-09-03 00:01:36.881085 | orchestrator | 00:01:36.880 STDOUT terraform:  + availability_zone = "nova" 2025-09-03 00:01:36.881090 | orchestrator | 00:01:36.880 STDOUT terraform:  + id = (known after apply) 2025-09-03 00:01:36.881094 | orchestrator | 00:01:36.881 STDOUT terraform:  + image_id = (known after apply) 2025-09-03 00:01:36.881178 | orchestrator | 00:01:36.881 STDOUT terraform:  + metadata = (known after apply) 2025-09-03 00:01:36.881184 | orchestrator | 00:01:36.881 STDOUT terraform:  + name = "testbed-volume-manager-base" 2025-09-03 00:01:36.881260 | orchestrator | 00:01:36.881 STDOUT terraform:  + region = (known after apply) 2025-09-03 00:01:36.881266 | orchestrator | 00:01:36.881 STDOUT terraform:  + size = 80 2025-09-03 00:01:36.881272 | orchestrator | 00:01:36.881 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-03 00:01:36.881341 | orchestrator | 00:01:36.881 STDOUT terraform:  + volume_type = "ssd" 2025-09-03 00:01:36.881347 | orchestrator | 00:01:36.881 STDOUT terraform:  } 2025-09-03 00:01:36.881444 | orchestrator | 00:01:36.881 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[0] will be created 2025-09-03 00:01:36.881477 | orchestrator | 00:01:36.881 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-09-03 00:01:36.881537 | orchestrator | 00:01:36.881 STDOUT terraform:  + attachment = (known after apply) 2025-09-03 00:01:36.881597 | orchestrator | 00:01:36.881 STDOUT terraform:  + availability_zone = "nova" 2025-09-03 
00:01:36.881605 | orchestrator | 00:01:36.881 STDOUT terraform:  + id = (known after apply) 2025-09-03 00:01:36.881705 | orchestrator | 00:01:36.881 STDOUT terraform:  + image_id = (known after apply) 2025-09-03 00:01:36.881710 | orchestrator | 00:01:36.881 STDOUT terraform:  + metadata = (known after apply) 2025-09-03 00:01:36.881781 | orchestrator | 00:01:36.881 STDOUT terraform:  + name = "testbed-volume-0-node-base" 2025-09-03 00:01:36.881841 | orchestrator | 00:01:36.881 STDOUT terraform:  + region = (known after apply) 2025-09-03 00:01:36.881847 | orchestrator | 00:01:36.881 STDOUT terraform:  + size = 80 2025-09-03 00:01:36.881853 | orchestrator | 00:01:36.881 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-03 00:01:36.881921 | orchestrator | 00:01:36.881 STDOUT terraform:  + volume_type = "ssd" 2025-09-03 00:01:36.881927 | orchestrator | 00:01:36.881 STDOUT terraform:  } 2025-09-03 00:01:36.882545 | orchestrator | 00:01:36.882 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[1] will be created 2025-09-03 00:01:36.882816 | orchestrator | 00:01:36.882 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-09-03 00:01:36.882908 | orchestrator | 00:01:36.882 STDOUT terraform:  + attachment = (known after apply) 2025-09-03 00:01:36.882991 | orchestrator | 00:01:36.882 STDOUT terraform:  + availability_zone = "nova" 2025-09-03 00:01:36.883061 | orchestrator | 00:01:36.882 STDOUT terraform:  + id = (known after apply) 2025-09-03 00:01:36.883152 | orchestrator | 00:01:36.883 STDOUT terraform:  + image_id = (known after apply) 2025-09-03 00:01:36.883272 | orchestrator | 00:01:36.883 STDOUT terraform:  + metadata = (known after apply) 2025-09-03 00:01:36.883405 | orchestrator | 00:01:36.883 STDOUT terraform:  + name = "testbed-volume-1-node-base" 2025-09-03 00:01:36.883513 | orchestrator | 00:01:36.883 STDOUT terraform:  + region = (known after apply) 2025-09-03 00:01:36.883529 | orchestrator | 00:01:36.883 STDOUT terraform:  + size = 80 2025-09-03 00:01:36.883638 | orchestrator | 00:01:36.883 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-03 00:01:36.883711 | orchestrator | 00:01:36.883 STDOUT terraform:  + volume_type = "ssd" 2025-09-03 00:01:36.883757 | orchestrator | 00:01:36.883 STDOUT terraform:  } 2025-09-03 00:01:36.884107 | orchestrator | 00:01:36.883 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[2] will be created 2025-09-03 00:01:36.884215 | orchestrator | 00:01:36.884 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-09-03 00:01:36.884252 | orchestrator | 00:01:36.884 STDOUT terraform:  + attachment = (known after apply) 2025-09-03 00:01:36.884391 | orchestrator | 00:01:36.884 STDOUT terraform:  + availability_zone = "nova" 2025-09-03 00:01:36.884498 | orchestrator | 00:01:36.884 STDOUT terraform:  + id = (known after apply) 2025-09-03 00:01:36.884610 | orchestrator | 00:01:36.884 STDOUT terraform:  + image_id = (known after apply) 2025-09-03 00:01:36.884824 | orchestrator | 00:01:36.884 STDOUT terraform:  + metadata = (known after apply) 2025-09-03 00:01:36.885182 | orchestrator | 00:01:36.884 STDOUT terraform:  + name = "testbed-volume-2-node-base" 2025-09-03 00:01:36.885190 | orchestrator | 00:01:36.885 STDOUT terraform:  + region = (known after apply) 2025-09-03 00:01:36.885360 | orchestrator | 00:01:36.885 STDOUT terraform:  + size = 80 2025-09-03 00:01:36.885414 | orchestrator | 00:01:36.885 STDOUT terraform:  + 
volume_retype_policy = "never" 2025-09-03 00:01:36.885498 | orchestrator | 00:01:36.885 STDOUT terraform:  + volume_type = "ssd" 2025-09-03 00:01:36.885656 | orchestrator | 00:01:36.885 STDOUT terraform:  } 2025-09-03 00:01:36.888144 | orchestrator | 00:01:36.888 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[3] will be created 2025-09-03 00:01:36.888299 | orchestrator | 00:01:36.888 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-09-03 00:01:36.888472 | orchestrator | 00:01:36.888 STDOUT terraform:  + attachment = (known after apply) 2025-09-03 00:01:36.888511 | orchestrator | 00:01:36.888 STDOUT terraform:  + availability_zone = "nova" 2025-09-03 00:01:36.888714 | orchestrator | 00:01:36.888 STDOUT terraform:  + id = (known after apply) 2025-09-03 00:01:36.888964 | orchestrator | 00:01:36.888 STDOUT terraform:  + image_id = (known after apply) 2025-09-03 00:01:36.889120 | orchestrator | 00:01:36.888 STDOUT terraform:  + metadata = (known after apply) 2025-09-03 00:01:36.889304 | orchestrator | 00:01:36.889 STDOUT terraform:  + name = "testbed-volume-3-node-base" 2025-09-03 00:01:36.889466 | orchestrator | 00:01:36.889 STDOUT terraform:  + region = (known after apply) 2025-09-03 00:01:36.889472 | orchestrator | 00:01:36.889 STDOUT terraform:  + size = 80 2025-09-03 00:01:36.889643 | orchestrator | 00:01:36.889 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-03 00:01:36.889726 | orchestrator | 00:01:36.889 STDOUT terraform:  + volume_type = "ssd" 2025-09-03 00:01:36.889805 | orchestrator | 00:01:36.889 STDOUT terraform:  } 2025-09-03 00:01:36.889910 | orchestrator | 00:01:36.889 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[4] will be created 2025-09-03 00:01:36.890137 | orchestrator | 00:01:36.889 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-09-03 00:01:36.890145 | orchestrator | 00:01:36.890 STDOUT terraform:  + attachment = (known after apply) 2025-09-03 00:01:36.890208 | orchestrator | 00:01:36.890 STDOUT terraform:  + availability_zone = "nova" 2025-09-03 00:01:36.890294 | orchestrator | 00:01:36.890 STDOUT terraform:  + id = (known after apply) 2025-09-03 00:01:36.890435 | orchestrator | 00:01:36.890 STDOUT terraform:  + image_id = (known after apply) 2025-09-03 00:01:36.890479 | orchestrator | 00:01:36.890 STDOUT terraform:  + metadata = (known after apply) 2025-09-03 00:01:36.890671 | orchestrator | 00:01:36.890 STDOUT terraform:  + name = "testbed-volume-4-node-base" 2025-09-03 00:01:36.890817 | orchestrator | 00:01:36.890 STDOUT terraform:  + region = (known after apply) 2025-09-03 00:01:36.890932 | orchestrator | 00:01:36.890 STDOUT terraform:  + size = 80 2025-09-03 00:01:36.890938 | orchestrator | 00:01:36.890 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-03 00:01:36.890989 | orchestrator | 00:01:36.890 STDOUT terraform:  + volume_type = "ssd" 2025-09-03 00:01:36.891015 | orchestrator | 00:01:36.890 STDOUT terraform:  } 2025-09-03 00:01:36.891201 | orchestrator | 00:01:36.891 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[5] will be created 2025-09-03 00:01:36.891351 | orchestrator | 00:01:36.891 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-09-03 00:01:36.891497 | orchestrator | 00:01:36.891 STDOUT terraform:  + attachment = (known after apply) 2025-09-03 00:01:36.891581 | orchestrator | 00:01:36.891 STDOUT terraform:  + availability_zone = "nova" 
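All of the node_base_volume and node_volume entries in this plan follow the same shape; a minimal, hypothetical HCL sketch of a counted openstack_blockstorage_volume_v3 resource consistent with the attributes shown (the count value and the image reference are assumptions read off the plan, not the testbed's actual code):

  resource "openstack_blockstorage_volume_v3" "node_base_volume" {
    count             = 6 # node_base_volume[0] through [5] appear in the plan above
    name              = "testbed-volume-${count.index}-node-base"
    size              = 80 # GB, matching "size = 80" in the plan
    volume_type       = "ssd"
    availability_zone = "nova"
    image_id          = data.openstack_images_image_v2.image_node.id # resolved at apply time, hence "(known after apply)" above
    # volume_retype_policy defaults to "never", as shown in the plan
  }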
2025-09-03 00:01:36.891654 | orchestrator | 00:01:36.891 STDOUT terraform:  + id = (known after apply) 2025-09-03 00:01:36.891698 | orchestrator | 00:01:36.891 STDOUT terraform:  + image_id = (known after apply) 2025-09-03 00:01:36.891960 | orchestrator | 00:01:36.891 STDOUT terraform:  + metadata = (known after apply) 2025-09-03 00:01:36.892135 | orchestrator | 00:01:36.891 STDOUT terraform:  + name = "testbed-volume-5-node-base" 2025-09-03 00:01:36.892298 | orchestrator | 00:01:36.892 STDOUT terraform:  + region = (known after apply) 2025-09-03 00:01:36.892364 | orchestrator | 00:01:36.892 STDOUT terraform:  + size = 80 2025-09-03 00:01:36.892550 | orchestrator | 00:01:36.892 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-03 00:01:36.892572 | orchestrator | 00:01:36.892 STDOUT terraform:  + volume_type = "ssd" 2025-09-03 00:01:36.892770 | orchestrator | 00:01:36.892 STDOUT terraform:  } 2025-09-03 00:01:36.892979 | orchestrator | 00:01:36.892 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[0] will be created 2025-09-03 00:01:36.893140 | orchestrator | 00:01:36.892 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-03 00:01:36.893213 | orchestrator | 00:01:36.893 STDOUT terraform:  + attachment = (known after apply) 2025-09-03 00:01:36.893233 | orchestrator | 00:01:36.893 STDOUT terraform:  + availability_zone = "nova" 2025-09-03 00:01:36.893378 | orchestrator | 00:01:36.893 STDOUT terraform:  + id = (known after apply) 2025-09-03 00:01:36.893450 | orchestrator | 00:01:36.893 STDOUT terraform:  + metadata = (known after apply) 2025-09-03 00:01:36.893571 | orchestrator | 00:01:36.893 STDOUT terraform:  + name = "testbed-volume-0-node-3" 2025-09-03 00:01:36.893705 | orchestrator | 00:01:36.893 STDOUT terraform:  + region = (known after apply) 2025-09-03 00:01:36.893715 | orchestrator | 00:01:36.893 STDOUT terraform:  + size = 20 2025-09-03 00:01:36.893828 | orchestrator | 00:01:36.893 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-03 00:01:36.893836 | orchestrator | 00:01:36.893 STDOUT terraform:  + volume_type = "ssd" 2025-09-03 00:01:36.893842 | orchestrator | 00:01:36.893 STDOUT terraform:  } 2025-09-03 00:01:36.893942 | orchestrator | 00:01:36.893 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-09-03 00:01:36.894083 | orchestrator | 00:01:36.893 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-03 00:01:36.894141 | orchestrator | 00:01:36.894 STDOUT terraform:  + attachment = (known after apply) 2025-09-03 00:01:36.894228 | orchestrator | 00:01:36.894 STDOUT terraform:  + availability_zone = "nova" 2025-09-03 00:01:36.894318 | orchestrator | 00:01:36.894 STDOUT terraform:  + id = (known after apply) 2025-09-03 00:01:36.894364 | orchestrator | 00:01:36.894 STDOUT terraform:  + metadata = (known after apply) 2025-09-03 00:01:36.894518 | orchestrator | 00:01:36.894 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-09-03 00:01:36.894593 | orchestrator | 00:01:36.894 STDOUT terraform:  + region = (known after apply) 2025-09-03 00:01:36.894656 | orchestrator | 00:01:36.894 STDOUT terraform:  + size = 20 2025-09-03 00:01:36.894662 | orchestrator | 00:01:36.894 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-03 00:01:36.894763 | orchestrator | 00:01:36.894 STDOUT terraform:  + volume_type = "ssd" 2025-09-03 00:01:36.894804 | orchestrator | 00:01:36.894 STDOUT terraform:  } 2025-09-03 00:01:36.894823 | orchestrator 
| 00:01:36.894 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-09-03 00:01:36.894934 | orchestrator | 00:01:36.894 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-03 00:01:36.895026 | orchestrator | 00:01:36.894 STDOUT terraform:  + attachment = (known after apply) 2025-09-03 00:01:36.895134 | orchestrator | 00:01:36.895 STDOUT terraform:  + availability_zone = "nova" 2025-09-03 00:01:36.895181 | orchestrator | 00:01:36.895 STDOUT terraform:  + id = (known after apply) 2025-09-03 00:01:36.895224 | orchestrator | 00:01:36.895 STDOUT terraform:  + metadata = (known after apply) 2025-09-03 00:01:36.895340 | orchestrator | 00:01:36.895 STDOUT terraform:  + name = "testbed-volume-2-node-5" 2025-09-03 00:01:36.895418 | orchestrator | 00:01:36.895 STDOUT terraform:  + region = (known after apply) 2025-09-03 00:01:36.895478 | orchestrator | 00:01:36.895 STDOUT terraform:  + size = 20 2025-09-03 00:01:36.895601 | orchestrator | 00:01:36.895 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-03 00:01:36.895657 | orchestrator | 00:01:36.895 STDOUT terraform:  + volume_type = "ssd" 2025-09-03 00:01:36.895662 | orchestrator | 00:01:36.895 STDOUT terraform:  } 2025-09-03 00:01:36.895812 | orchestrator | 00:01:36.895 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-09-03 00:01:36.896015 | orchestrator | 00:01:36.895 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-03 00:01:36.896093 | orchestrator | 00:01:36.895 STDOUT terraform:  + attachment = (known after apply) 2025-09-03 00:01:36.896100 | orchestrator | 00:01:36.896 STDOUT terraform:  + availability_zone = "nova" 2025-09-03 00:01:36.896296 | orchestrator | 00:01:36.896 STDOUT terraform:  + id = (known after apply) 2025-09-03 00:01:36.896348 | orchestrator | 00:01:36.896 STDOUT terraform:  + metadata = (known after apply) 2025-09-03 00:01:36.896415 | orchestrator | 00:01:36.896 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-09-03 00:01:36.896542 | orchestrator | 00:01:36.896 STDOUT terraform:  + region = (known after apply) 2025-09-03 00:01:36.896707 | orchestrator | 00:01:36.896 STDOUT terraform:  + size = 20 2025-09-03 00:01:36.896820 | orchestrator | 00:01:36.896 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-03 00:01:36.896826 | orchestrator | 00:01:36.896 STDOUT terraform:  + volume_type = "ssd" 2025-09-03 00:01:36.896843 | orchestrator | 00:01:36.896 STDOUT terraform:  } 2025-09-03 00:01:36.897007 | orchestrator | 00:01:36.896 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-09-03 00:01:36.897100 | orchestrator | 00:01:36.896 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-03 00:01:36.897335 | orchestrator | 00:01:36.897 STDOUT terraform:  + attachment = (known after apply) 2025-09-03 00:01:36.897462 | orchestrator | 00:01:36.897 STDOUT terraform:  + availability_zone = "nova" 2025-09-03 00:01:36.897468 | orchestrator | 00:01:36.897 STDOUT terraform:  + id = (known after apply) 2025-09-03 00:01:36.897507 | orchestrator | 00:01:36.897 STDOUT terraform:  + metadata = (known after apply) 2025-09-03 00:01:36.897697 | orchestrator | 00:01:36.897 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-09-03 00:01:36.897761 | orchestrator | 00:01:36.897 STDOUT terraform:  + region = (known after apply) 2025-09-03 00:01:36.897844 | orchestrator | 00:01:36.897 STDOUT 
terraform:  + size = 20 2025-09-03 00:01:36.897899 | orchestrator | 00:01:36.897 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-03 00:01:36.897992 | orchestrator | 00:01:36.897 STDOUT terraform:  + volume_type = "ssd" 2025-09-03 00:01:36.898045 | orchestrator | 00:01:36.897 STDOUT terraform:  } 2025-09-03 00:01:36.898115 | orchestrator | 00:01:36.898 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-09-03 00:01:36.898242 | orchestrator | 00:01:36.898 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-03 00:01:36.898320 | orchestrator | 00:01:36.898 STDOUT terraform:  + attachment = (known after apply) 2025-09-03 00:01:36.898487 | orchestrator | 00:01:36.898 STDOUT terraform:  + availability_zone = "nova" 2025-09-03 00:01:36.898657 | orchestrator | 00:01:36.898 STDOUT terraform:  + id = (known after apply) 2025-09-03 00:01:36.898774 | orchestrator | 00:01:36.898 STDOUT terraform:  + metadata = (known after apply) 2025-09-03 00:01:36.898822 | orchestrator | 00:01:36.898 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-09-03 00:01:36.898974 | orchestrator | 00:01:36.898 STDOUT terraform:  + region = (known after apply) 2025-09-03 00:01:36.899126 | orchestrator | 00:01:36.898 STDOUT terraform:  + size = 20 2025-09-03 00:01:36.899132 | orchestrator | 00:01:36.899 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-03 00:01:36.899140 | orchestrator | 00:01:36.899 STDOUT terraform:  + volume_type = "ssd" 2025-09-03 00:01:36.899261 | orchestrator | 00:01:36.899 STDOUT terraform:  } 2025-09-03 00:01:36.899739 | orchestrator | 00:01:36.899 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-09-03 00:01:36.899753 | orchestrator | 00:01:36.899 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-03 00:01:36.899902 | orchestrator | 00:01:36.899 STDOUT terraform:  + attachment = (known after apply) 2025-09-03 00:01:36.899976 | orchestrator | 00:01:36.899 STDOUT terraform:  + availability_zone = "nova" 2025-09-03 00:01:36.900053 | orchestrator | 00:01:36.899 STDOUT terraform:  + id = (known after apply) 2025-09-03 00:01:36.900198 | orchestrator | 00:01:36.900 STDOUT terraform:  + metadata = (known after apply) 2025-09-03 00:01:36.900205 | orchestrator | 00:01:36.900 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-09-03 00:01:36.900381 | orchestrator | 00:01:36.900 STDOUT terraform:  + region = (known after apply) 2025-09-03 00:01:36.900482 | orchestrator | 00:01:36.900 STDOUT terraform:  + size = 20 2025-09-03 00:01:36.900525 | orchestrator | 00:01:36.900 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-03 00:01:36.900564 | orchestrator | 00:01:36.900 STDOUT terraform:  + volume_type = "ssd" 2025-09-03 00:01:36.900571 | orchestrator | 00:01:36.900 STDOUT terraform:  } 2025-09-03 00:01:36.900695 | orchestrator | 00:01:36.900 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-09-03 00:01:36.900753 | orchestrator | 00:01:36.900 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-03 00:01:36.900824 | orchestrator | 00:01:36.900 STDOUT terraform:  + attachment = (known after apply) 2025-09-03 00:01:36.900959 | orchestrator | 00:01:36.900 STDOUT terraform:  + availability_zone = "nova" 2025-09-03 00:01:36.901044 | orchestrator | 00:01:36.900 STDOUT terraform:  + id = (known after apply) 2025-09-03 00:01:36.901159 | orchestrator | 
00:01:36.901 STDOUT terraform:  + metadata = (known after apply) 2025-09-03 00:01:36.901211 | orchestrator | 00:01:36.901 STDOUT terraform:  + name = "testbed-volume-7-node-4" 2025-09-03 00:01:36.901302 | orchestrator | 00:01:36.901 STDOUT terraform:  + region = (known after apply) 2025-09-03 00:01:36.901393 | orchestrator | 00:01:36.901 STDOUT terraform:  + size = 20 2025-09-03 00:01:36.901471 | orchestrator | 00:01:36.901 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-03 00:01:36.901510 | orchestrator | 00:01:36.901 STDOUT terraform:  + volume_type = "ssd" 2025-09-03 00:01:36.901534 | orchestrator | 00:01:36.901 STDOUT terraform:  } 2025-09-03 00:01:36.901738 | orchestrator | 00:01:36.901 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-09-03 00:01:36.901959 | orchestrator | 00:01:36.901 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-03 00:01:36.902145 | orchestrator | 00:01:36.901 STDOUT terraform:  + attachment = (known after apply) 2025-09-03 00:01:36.902167 | orchestrator | 00:01:36.902 STDOUT terraform:  + availability_zone = "nova" 2025-09-03 00:01:36.902234 | orchestrator | 00:01:36.902 STDOUT terraform:  + id = (known after apply) 2025-09-03 00:01:36.902356 | orchestrator | 00:01:36.902 STDOUT terraform:  + metadata = (known after apply) 2025-09-03 00:01:36.902431 | orchestrator | 00:01:36.902 STDOUT terraform:  + name = "testbed-volume-8-node-5" 2025-09-03 00:01:36.902548 | orchestrator | 00:01:36.902 STDOUT terraform:  + region = (known after apply) 2025-09-03 00:01:36.902611 | orchestrator | 00:01:36.902 STDOUT terraform:  + size = 20 2025-09-03 00:01:36.902664 | orchestrator | 00:01:36.902 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-03 00:01:36.902805 | orchestrator | 00:01:36.902 STDOUT terraform:  + volume_type = "ssd" 2025-09-03 00:01:36.902812 | orchestrator | 00:01:36.902 STDOUT terraform:  } 2025-09-03 00:01:36.903002 | orchestrator | 00:01:36.902 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-09-03 00:01:36.903022 | orchestrator | 00:01:36.902 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-09-03 00:01:36.903074 | orchestrator | 00:01:36.902 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-03 00:01:36.910187 | orchestrator | 00:01:36.903 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-03 00:01:36.910304 | orchestrator | 00:01:36.910 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-03 00:01:36.910353 | orchestrator | 00:01:36.910 STDOUT terraform:  + all_tags = (known after apply) 2025-09-03 00:01:36.910388 | orchestrator | 00:01:36.910 STDOUT terraform:  + availability_zone = "nova" 2025-09-03 00:01:36.910440 | orchestrator | 00:01:36.910 STDOUT terraform:  + config_drive = true 2025-09-03 00:01:36.910485 | orchestrator | 00:01:36.910 STDOUT terraform:  + created = (known after apply) 2025-09-03 00:01:36.910538 | orchestrator | 00:01:36.910 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-03 00:01:36.910576 | orchestrator | 00:01:36.910 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-09-03 00:01:36.910608 | orchestrator | 00:01:36.910 STDOUT terraform:  + force_delete = false 2025-09-03 00:01:36.910649 | orchestrator | 00:01:36.910 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-03 00:01:36.910693 | orchestrator | 00:01:36.910 STDOUT terraform:  + id = (known after apply) 2025-09-03 
00:01:36.910736 | orchestrator | 00:01:36.910 STDOUT terraform:  + image_id = (known after apply) 2025-09-03 00:01:36.910777 | orchestrator | 00:01:36.910 STDOUT terraform:  + image_name = (known after apply) 2025-09-03 00:01:36.910809 | orchestrator | 00:01:36.910 STDOUT terraform:  + key_pair = "testbed" 2025-09-03 00:01:36.910846 | orchestrator | 00:01:36.910 STDOUT terraform:  + name = "testbed-manager" 2025-09-03 00:01:36.910880 | orchestrator | 00:01:36.910 STDOUT terraform:  + power_state = "active" 2025-09-03 00:01:36.910920 | orchestrator | 00:01:36.910 STDOUT terraform:  + region = (known after apply) 2025-09-03 00:01:36.910963 | orchestrator | 00:01:36.910 STDOUT terraform:  + security_groups = (known after apply) 2025-09-03 00:01:36.910995 | orchestrator | 00:01:36.910 STDOUT terraform:  + stop_before_destroy = false 2025-09-03 00:01:36.911036 | orchestrator | 00:01:36.911 STDOUT terraform:  + updated = (known after apply) 2025-09-03 00:01:36.911074 | orchestrator | 00:01:36.911 STDOUT terraform:  + user_data = (sensitive value) 2025-09-03 00:01:36.911097 | orchestrator | 00:01:36.911 STDOUT terraform:  + block_device { 2025-09-03 00:01:36.911128 | orchestrator | 00:01:36.911 STDOUT terraform:  + boot_index = 0 2025-09-03 00:01:36.911163 | orchestrator | 00:01:36.911 STDOUT terraform:  + delete_on_termination = false 2025-09-03 00:01:36.911200 | orchestrator | 00:01:36.911 STDOUT terraform:  + destination_type = "volume" 2025-09-03 00:01:36.911234 | orchestrator | 00:01:36.911 STDOUT terraform:  + multiattach = false 2025-09-03 00:01:36.911283 | orchestrator | 00:01:36.911 STDOUT terraform:  + source_type = "volume" 2025-09-03 00:01:36.911326 | orchestrator | 00:01:36.911 STDOUT terraform:  + uuid = (known after apply) 2025-09-03 00:01:36.911347 | orchestrator | 00:01:36.911 STDOUT terraform:  } 2025-09-03 00:01:36.911368 | orchestrator | 00:01:36.911 STDOUT terraform:  + network { 2025-09-03 00:01:36.911397 | orchestrator | 00:01:36.911 STDOUT terraform:  + access_network = false 2025-09-03 00:01:36.911444 | orchestrator | 00:01:36.911 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-03 00:01:36.911482 | orchestrator | 00:01:36.911 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-03 00:01:36.911519 | orchestrator | 00:01:36.911 STDOUT terraform:  + mac = (known after apply) 2025-09-03 00:01:36.911563 | orchestrator | 00:01:36.911 STDOUT terraform:  + name = (known after apply) 2025-09-03 00:01:36.911600 | orchestrator | 00:01:36.911 STDOUT terraform:  + port = (known after apply) 2025-09-03 00:01:36.911637 | orchestrator | 00:01:36.911 STDOUT terraform:  + uuid = (known after apply) 2025-09-03 00:01:36.911658 | orchestrator | 00:01:36.911 STDOUT terraform:  } 2025-09-03 00:01:36.911678 | orchestrator | 00:01:36.911 STDOUT terraform:  } 2025-09-03 00:01:36.911728 | orchestrator | 00:01:36.911 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-09-03 00:01:36.911783 | orchestrator | 00:01:36.911 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-03 00:01:36.911826 | orchestrator | 00:01:36.911 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-03 00:01:36.911872 | orchestrator | 00:01:36.911 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-03 00:01:36.911915 | orchestrator | 00:01:36.911 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-03 00:01:36.911956 | orchestrator | 00:01:36.911 STDOUT terraform:  + all_tags = (known after apply) 
2025-09-03 00:01:36.912 .. 00:01:36.928 | orchestrator | STDOUT terraform:
  # openstack_compute_instance_v2.node_server[0] (continued)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-0"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[1] .. node_server[5] will be created
  # with the same attributes (plus access_ip_v4, access_ip_v6, all_metadata and
  # all_tags, all known after apply); only the name differs:
  #   node_server[1]  name = "testbed-node-1"
  #   node_server[2]  name = "testbed-node-2"
  #   node_server[3]  name = "testbed-node-3"
  #   node_server[4]  name = "testbed-node-4"
  #   node_server[5]  name = "testbed-node-5"
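Six identical instance blocks like the ones above are typically produced by one counted resource rather than six hand-written ones. As a rough sketch only (the volume resource `node_volume`, the port reference and the `user_data.yml` file name are assumptions, not taken from the actual testbed sources), a configuration of this shape would yield such a plan with the OpenStack Terraform provider:

    # Hypothetical sketch, not the actual testbed configuration.
    resource "openstack_compute_instance_v2" "node_server" {
      count             = 6
      name              = "testbed-node-${count.index}"
      availability_zone = "nova"
      flavor_name       = "OSISM-8V-32"
      key_pair          = "testbed"
      config_drive      = true
      power_state       = "active"
      user_data         = file("user_data.yml")  # shown as a hash in the plan

      # Boot from a pre-created volume (source_type = destination_type = "volume").
      block_device {
        uuid                  = openstack_blockstorage_volume_v3.node_volume[count.index].id
        source_type           = "volume"
        destination_type      = "volume"
        boot_index            = 0
        delete_on_termination = false
      }

      # Attach the pre-created management port instead of a network name.
      network {
        port = openstack_networking_port_v2.node_port_management[count.index].id
      }
    }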
2025-09-03 00:01:36.928 .. 00:01:36.937 | orchestrator | STDOUT terraform:
  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "testbed"
      + private_key = (sensitive value)
      + public_key  = (known after apply)
      + region      = (known after apply)
      + user_id     = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[0] .. [8] will be
  # created (nine attachments), each with:
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }
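The keypair's public key is only known after apply, which is what the provider reports when no public_key is supplied and Nova generates the pair itself, and the nine volume attachments all reference IDs that are not known yet. A minimal sketch of how this could be declared follows; the `extra_volume` resource and the way attachments map onto instances are assumptions:

    # Hypothetical sketch of the keypair and the nine volume attachments.
    resource "openstack_compute_keypair_v2" "key" {
      name = "testbed"
      # No public_key given, so OpenStack generates the pair and returns the
      # private key as a sensitive attribute, as in the plan above.
    }

    resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      count       = 9
      instance_id = openstack_compute_instance_v2.node_server[count.index % 6].id  # distribution assumed
      volume_id   = openstack_blockstorage_volume_v3.extra_volume[count.index].id  # volume resource assumed
    }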
"openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" { 2025-09-03 00:01:36.936950 | orchestrator | 00:01:36.936 STDOUT terraform:  + fixed_ip = (known after apply) 2025-09-03 00:01:36.937029 | orchestrator | 00:01:36.936 STDOUT terraform:  + floating_ip = (known after apply) 2025-09-03 00:01:36.937128 | orchestrator | 00:01:36.937 STDOUT terraform:  + id = (known after apply) 2025-09-03 00:01:36.937172 | orchestrator | 00:01:36.937 STDOUT terraform:  + port_id = (known after apply) 2025-09-03 00:01:36.937381 | orchestrator | 00:01:36.937 STDOUT terraform:  + region = (known after apply) 2025-09-03 00:01:36.937387 | orchestrator | 00:01:36.937 STDOUT terraform:  } 2025-09-03 00:01:36.937581 | orchestrator | 00:01:36.937 STDOUT terraform:  # openstack_networking_floatingip_v2.manager_floating_ip will be created 2025-09-03 00:01:36.937783 | orchestrator | 00:01:36.937 STDOUT terraform:  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" { 2025-09-03 00:01:36.938102 | orchestrator | 00:01:36.937 STDOUT terraform:  + address = (known after apply) 2025-09-03 00:01:36.938237 | orchestrator | 00:01:36.938 STDOUT terraform:  + all_tags = (known after apply) 2025-09-03 00:01:36.938245 | orchestrator | 00:01:36.938 STDOUT terraform:  + dns_domain = (known after apply) 2025-09-03 00:01:36.938363 | orchestrator | 00:01:36.938 STDOUT terraform:  + dns_name = (known after apply) 2025-09-03 00:01:36.938414 | orchestrator | 00:01:36.938 STDOUT terraform:  + fixed_ip = (known after apply) 2025-09-03 00:01:36.938477 | orchestrator | 00:01:36.938 STDOUT terraform:  + id = (known after apply) 2025-09-03 00:01:36.938556 | orchestrator | 00:01:36.938 STDOUT terraform:  + pool = "public" 2025-09-03 00:01:36.938650 | orchestrator | 00:01:36.938 STDOUT terraform:  + port_id = (known after apply) 2025-09-03 00:01:36.938659 | orchestrator | 00:01:36.938 STDOUT terraform:  + region = (known after apply) 2025-09-03 00:01:36.938762 | orchestrator | 00:01:36.938 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-03 00:01:36.938808 | orchestrator | 00:01:36.938 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-03 00:01:36.938917 | orchestrator | 00:01:36.938 STDOUT terraform:  } 2025-09-03 00:01:36.939035 | orchestrator | 00:01:36.938 STDOUT terraform:  # openstack_networking_network_v2.net_management will be created 2025-09-03 00:01:36.939239 | orchestrator | 00:01:36.939 STDOUT terraform:  + resource "openstack_networking_network_v2" "net_management" { 2025-09-03 00:01:36.939246 | orchestrator | 00:01:36.939 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-03 00:01:36.939311 | orchestrator | 00:01:36.939 STDOUT terraform:  + all_tags = (known after apply) 2025-09-03 00:01:36.939414 | orchestrator | 00:01:36.939 STDOUT terraform:  + availability_zone_hints = [ 2025-09-03 00:01:36.939442 | orchestrator | 00:01:36.939 STDOUT terraform:  + "nova", 2025-09-03 00:01:36.939448 | orchestrator | 00:01:36.939 STDOUT terraform:  ] 2025-09-03 00:01:36.939619 | orchestrator | 00:01:36.939 STDOUT terraform:  + dns_domain = (known after apply) 2025-09-03 00:01:36.939750 | orchestrator | 00:01:36.939 STDOUT terraform:  + external = (known after apply) 2025-09-03 00:01:36.939918 | orchestrator | 00:01:36.939 STDOUT terraform:  + id = (known after apply) 2025-09-03 00:01:36.939980 | orchestrator | 00:01:36.939 STDOUT terraform:  + mtu = (known after apply) 2025-09-03 00:01:36.940196 | orchestrator | 00:01:36.940 STDOUT terraform:  + name = 
"net-testbed-management" 2025-09-03 00:01:36.940463 | orchestrator | 00:01:36.940 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-03 00:01:36.940617 | orchestrator | 00:01:36.940 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-03 00:01:36.940741 | orchestrator | 00:01:36.940 STDOUT terraform:  + region = (known after apply) 2025-09-03 00:01:36.942159 | orchestrator | 00:01:36.940 STDOUT terraform:  + shared = (known after apply) 2025-09-03 00:01:36.942184 | orchestrator | 00:01:36.940 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-03 00:01:36.942188 | orchestrator | 00:01:36.940 STDOUT terraform:  + transparent_vlan = (known after apply) 2025-09-03 00:01:36.942192 | orchestrator | 00:01:36.940 STDOUT terraform:  + segments (known after apply) 2025-09-03 00:01:36.942197 | orchestrator | 00:01:36.940 STDOUT terraform:  } 2025-09-03 00:01:36.942202 | orchestrator | 00:01:36.940 STDOUT terraform:  # openstack_networking_port_v2.manager_port_management will be created 2025-09-03 00:01:36.942206 | orchestrator | 00:01:36.940 STDOUT terraform:  + resource "openstack_networking_port_v2" "manager_port_management" { 2025-09-03 00:01:36.942210 | orchestrator | 00:01:36.941 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-03 00:01:36.942215 | orchestrator | 00:01:36.941 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-03 00:01:36.942219 | orchestrator | 00:01:36.941 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-03 00:01:36.942223 | orchestrator | 00:01:36.941 STDOUT terraform:  + all_tags = (known after apply) 2025-09-03 00:01:36.942227 | orchestrator | 00:01:36.941 STDOUT terraform:  + device_id = (known after apply) 2025-09-03 00:01:36.942238 | orchestrator | 00:01:36.941 STDOUT terraform:  + device_owner = (known after apply) 2025-09-03 00:01:36.942242 | orchestrator | 00:01:36.941 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-03 00:01:36.942246 | orchestrator | 00:01:36.941 STDOUT terraform:  + dns_name = (known after apply) 2025-09-03 00:01:36.942250 | orchestrator | 00:01:36.941 STDOUT terraform:  + id = (known after apply) 2025-09-03 00:01:36.942253 | orchestrator | 00:01:36.941 STDOUT terraform:  + mac_address = (known after apply) 2025-09-03 00:01:36.942257 | orchestrator | 00:01:36.941 STDOUT terraform:  + network_id = (known after apply) 2025-09-03 00:01:36.942261 | orchestrator | 00:01:36.941 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-03 00:01:36.942265 | orchestrator | 00:01:36.941 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-03 00:01:36.942269 | orchestrator | 00:01:36.941 STDOUT terraform:  + region = (known after apply) 2025-09-03 00:01:36.942272 | orchestrator | 00:01:36.941 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-03 00:01:36.942276 | orchestrator | 00:01:36.941 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-03 00:01:36.942280 | orchestrator | 00:01:36.941 STDOUT terraform:  + allowed_address_pairs { 2025-09-03 00:01:36.942284 | orchestrator | 00:01:36.941 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-03 00:01:36.942287 | orchestrator | 00:01:36.941 STDOUT terraform:  } 2025-09-03 00:01:36.942291 | orchestrator | 00:01:36.941 STDOUT terraform:  + allowed_address_pairs { 2025-09-03 00:01:36.942295 | orchestrator | 00:01:36.941 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-03 00:01:36.942299 | orchestrator | 00:01:36.941 STDOUT 
2025-09-03 00:01:36.948 .. 00:01:36.962 | orchestrator | STDOUT terraform:
  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] .. [5] will be created
  # with the same attributes and the same four allowed_address_pairs; only the
  # fixed IP differs:
  #   node_port_management[1]  ip_address = "192.168.16.11"
  #   node_port_management[2]  ip_address = "192.168.16.12"
  #   node_port_management[3]  ip_address = "192.168.16.13"
  #   node_port_management[4]  ip_address = "192.168.16.14"
  #   node_port_management[5]  ip_address = "192.168.16.15"

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }
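Each node port pins one fixed address (192.168.16.10 through .15) and whitelists the same four prefixes via allowed_address_pairs, so VIP/VRRP traffic from those ranges is not dropped by port security; the router interface then attaches the management subnet to the router. A sketch, where the subnet resource name `subnet_management` is an assumption:

    # Hypothetical sketch of the node management ports and the router interface.
    resource "openstack_networking_port_v2" "node_port_management" {
      count      = 6
      network_id = openstack_networking_network_v2.net_management.id

      fixed_ip {
        subnet_id  = openstack_networking_subnet_v2.subnet_management.id  # subnet name assumed
        ip_address = "192.168.16.${10 + count.index}"
      }

      # The four prefixes allowed through port security in the plan above.
      dynamic "allowed_address_pairs" {
        for_each = ["192.168.112.0/20", "192.168.16.254/20", "192.168.16.8/20", "192.168.16.9/20"]
        content {
          ip_address = allowed_address_pairs.value
        }
      }
    }

    resource "openstack_networking_router_interface_v2" "router_interface" {
      router_id = openstack_networking_router_v2.router.id
      subnet_id = openstack_networking_subnet_v2.subnet_management.id  # subnet name assumed
    }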
00:01:36.962 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-03 00:01:36.963070 | orchestrator | 00:01:36.962 STDOUT terraform:  + all_tags = (known after apply) 2025-09-03 00:01:36.963086 | orchestrator | 00:01:36.963 STDOUT terraform:  + availability_zone_hints = [ 2025-09-03 00:01:36.963094 | orchestrator | 00:01:36.963 STDOUT terraform:  + "nova", 2025-09-03 00:01:36.963136 | orchestrator | 00:01:36.963 STDOUT terraform:  ] 2025-09-03 00:01:36.963258 | orchestrator | 00:01:36.963 STDOUT terraform:  + distributed = (known after apply) 2025-09-03 00:01:36.963375 | orchestrator | 00:01:36.963 STDOUT terraform:  + enable_snat = (known after apply) 2025-09-03 00:01:36.963619 | orchestrator | 00:01:36.963 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-09-03 00:01:36.963756 | orchestrator | 00:01:36.963 STDOUT terraform:  + external_qos_policy_id = (known after apply) 2025-09-03 00:01:36.963988 | orchestrator | 00:01:36.963 STDOUT terraform:  + id = (known after apply) 2025-09-03 00:01:36.964158 | orchestrator | 00:01:36.963 STDOUT terraform:  + name = "testbed" 2025-09-03 00:01:36.964316 | orchestrator | 00:01:36.964 STDOUT terraform:  + region = (known after apply) 2025-09-03 00:01:36.964527 | orchestrator | 00:01:36.964 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-03 00:01:36.964743 | orchestrator | 00:01:36.964 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-09-03 00:01:36.964783 | orchestrator | 00:01:36.964 STDOUT terraform:  } 2025-09-03 00:01:36.965017 | orchestrator | 00:01:36.964 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-09-03 00:01:36.965314 | orchestrator | 00:01:36.964 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-09-03 00:01:36.965407 | orchestrator | 00:01:36.965 STDOUT terraform:  + description = "ssh" 2025-09-03 00:01:36.965467 | orchestrator | 00:01:36.965 STDOUT terraform:  + direction = "ingress" 2025-09-03 00:01:36.965595 | orchestrator | 00:01:36.965 STDOUT terraform:  + ethertype = "IPv4" 2025-09-03 00:01:36.965746 | orchestrator | 00:01:36.965 STDOUT terraform:  + id = (known after apply) 2025-09-03 00:01:36.965776 | orchestrator | 00:01:36.965 STDOUT terraform:  + port_range_max = 22 2025-09-03 00:01:36.966038 | orchestrator | 00:01:36.965 STDOUT terraform:  + port_range_min = 22 2025-09-03 00:01:36.966141 | orchestrator | 00:01:36.965 STDOUT terraform:  + protocol = "tcp" 2025-09-03 00:01:36.966155 | orchestrator | 00:01:36.966 STDOUT terraform:  + region = (known after apply) 2025-09-03 00:01:36.966303 | orchestrator | 00:01:36.966 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-03 00:01:36.966393 | orchestrator | 00:01:36.966 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-03 00:01:36.966588 | orchestrator | 00:01:36.966 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-03 00:01:36.966989 | orchestrator | 00:01:36.966 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-03 00:01:36.967233 | orchestrator | 00:01:36.966 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-03 00:01:36.967344 | orchestrator | 00:01:36.967 STDOUT terraform:  } 2025-09-03 00:01:36.967840 | orchestrator | 00:01:36.967 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-09-03 00:01:36.968431 | orchestrator | 00:01:36.967 STDOUT 
terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-09-03 00:01:36.968598 | orchestrator | 00:01:36.968 STDOUT terraform:  + description = "wireguard" 2025-09-03 00:01:36.968659 | orchestrator | 00:01:36.968 STDOUT terraform:  + direction = "ingress" 2025-09-03 00:01:36.968775 | orchestrator | 00:01:36.968 STDOUT terraform:  + ethertype = "IPv4" 2025-09-03 00:01:36.969001 | orchestrator | 00:01:36.968 STDOUT terraform:  + id = (known after apply) 2025-09-03 00:01:36.969064 | orchestrator | 00:01:36.968 STDOUT terraform:  + port_range_max = 51820 2025-09-03 00:01:36.969158 | orchestrator | 00:01:36.969 STDOUT terraform:  + port_range_min = 51820 2025-09-03 00:01:36.969169 | orchestrator | 00:01:36.969 STDOUT terraform:  + protocol = "udp" 2025-09-03 00:01:36.969356 | orchestrator | 00:01:36.969 STDOUT terraform:  + region = (known after apply) 2025-09-03 00:01:36.969517 | orchestrator | 00:01:36.969 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-03 00:01:36.969692 | orchestrator | 00:01:36.969 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-03 00:01:36.969704 | orchestrator | 00:01:36.969 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-03 00:01:36.969912 | orchestrator | 00:01:36.969 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-03 00:01:36.970034 | orchestrator | 00:01:36.969 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-03 00:01:36.970045 | orchestrator | 00:01:36.970 STDOUT terraform:  } 2025-09-03 00:01:36.970110 | orchestrator | 00:01:36.970 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-09-03 00:01:36.970354 | orchestrator | 00:01:36.970 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-09-03 00:01:36.970458 | orchestrator | 00:01:36.970 STDOUT terraform:  + direction = "ingress" 2025-09-03 00:01:36.970516 | orchestrator | 00:01:36.970 STDOUT terraform:  + ethertype = "IPv4" 2025-09-03 00:01:36.971168 | orchestrator | 00:01:36.970 STDOUT terraform:  + id = (known after apply) 2025-09-03 00:01:36.971704 | orchestrator | 00:01:36.971 STDOUT terraform:  + protocol = "tcp" 2025-09-03 00:01:36.982258 | orchestrator | 00:01:36.971 STDOUT terraform:  + region = (known after apply) 2025-09-03 00:01:36.982399 | orchestrator | 00:01:36.982 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-03 00:01:36.982504 | orchestrator | 00:01:36.982 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-03 00:01:36.982647 | orchestrator | 00:01:36.982 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-09-03 00:01:36.982896 | orchestrator | 00:01:36.982 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-03 00:01:36.982946 | orchestrator | 00:01:36.982 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-03 00:01:36.983047 | orchestrator | 00:01:36.982 STDOUT terraform:  } 2025-09-03 00:01:36.983117 | orchestrator | 00:01:36.983 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-09-03 00:01:36.983314 | orchestrator | 00:01:36.983 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-09-03 00:01:36.983445 | orchestrator | 00:01:36.983 STDOUT terraform:  + direction = "ingress" 2025-09-03 00:01:36.983602 | orchestrator | 00:01:36.983 STDOUT terraform:  
+ ethertype = "IPv4" 2025-09-03 00:01:36.983652 | orchestrator | 00:01:36.983 STDOUT terraform:  + id = (known after apply) 2025-09-03 00:01:36.983809 | orchestrator | 00:01:36.983 STDOUT terraform:  + protocol = "udp" 2025-09-03 00:01:36.984054 | orchestrator | 00:01:36.983 STDOUT terraform:  + region = (known after apply) 2025-09-03 00:01:36.984089 | orchestrator | 00:01:36.984 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-03 00:01:36.984286 | orchestrator | 00:01:36.984 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-03 00:01:36.984436 | orchestrator | 00:01:36.984 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-09-03 00:01:36.984582 | orchestrator | 00:01:36.984 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-03 00:01:36.984713 | orchestrator | 00:01:36.984 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-03 00:01:36.984793 | orchestrator | 00:01:36.984 STDOUT terraform:  } 2025-09-03 00:01:36.984954 | orchestrator | 00:01:36.984 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-09-03 00:01:36.985123 | orchestrator | 00:01:36.984 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-09-03 00:01:36.985150 | orchestrator | 00:01:36.985 STDOUT terraform:  + direction = "ingress" 2025-09-03 00:01:36.985265 | orchestrator | 00:01:36.985 STDOUT terraform:  + ethertype = "IPv4" 2025-09-03 00:01:36.985311 | orchestrator | 00:01:36.985 STDOUT terraform:  + id = (known after apply) 2025-09-03 00:01:36.985444 | orchestrator | 00:01:36.985 STDOUT terraform:  + protocol = "icmp" 2025-09-03 00:01:36.985586 | orchestrator | 00:01:36.985 STDOUT terraform:  + region = (known after apply) 2025-09-03 00:01:36.985638 | orchestrator | 00:01:36.985 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-03 00:01:36.985697 | orchestrator | 00:01:36.985 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-03 00:01:36.985793 | orchestrator | 00:01:36.985 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-03 00:01:36.985844 | orchestrator | 00:01:36.985 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-03 00:01:36.986009 | orchestrator | 00:01:36.985 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-03 00:01:36.986032 | orchestrator | 00:01:36.985 STDOUT terraform:  } 2025-09-03 00:01:36.986156 | orchestrator | 00:01:36.985 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-09-03 00:01:36.986395 | orchestrator | 00:01:36.986 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-09-03 00:01:36.986457 | orchestrator | 00:01:36.986 STDOUT terraform:  + direction = "ingress" 2025-09-03 00:01:36.986612 | orchestrator | 00:01:36.986 STDOUT terraform:  + ethertype = "IPv4" 2025-09-03 00:01:36.986650 | orchestrator | 00:01:36.986 STDOUT terraform:  + id = (known after apply) 2025-09-03 00:01:36.986812 | orchestrator | 00:01:36.986 STDOUT terraform:  + protocol = "tcp" 2025-09-03 00:01:36.986942 | orchestrator | 00:01:36.986 STDOUT terraform:  + region = (known after apply) 2025-09-03 00:01:36.987036 | orchestrator | 00:01:36.986 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-03 00:01:36.987101 | orchestrator | 00:01:36.987 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-03 
00:01:36.987179 | orchestrator | 00:01:36.987 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-03 00:01:36.987276 | orchestrator | 00:01:36.987 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-03 00:01:36.987388 | orchestrator | 00:01:36.987 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-03 00:01:36.987482 | orchestrator | 00:01:36.987 STDOUT terraform:  } 2025-09-03 00:01:36.987607 | orchestrator | 00:01:36.987 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-09-03 00:01:36.987860 | orchestrator | 00:01:36.987 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-09-03 00:01:36.988013 | orchestrator | 00:01:36.987 STDOUT terraform:  + direction = "ingress" 2025-09-03 00:01:36.988129 | orchestrator | 00:01:36.987 STDOUT terraform:  + ethertype = "IPv4" 2025-09-03 00:01:36.988175 | orchestrator | 00:01:36.988 STDOUT terraform:  + id = (known after apply) 2025-09-03 00:01:36.988320 | orchestrator | 00:01:36.988 STDOUT terraform:  + protocol = "udp" 2025-09-03 00:01:36.988456 | orchestrator | 00:01:36.988 STDOUT terraform:  + region = (known after apply) 2025-09-03 00:01:36.988504 | orchestrator | 00:01:36.988 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-03 00:01:36.988783 | orchestrator | 00:01:36.988 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-03 00:01:36.988902 | orchestrator | 00:01:36.988 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-03 00:01:36.988946 | orchestrator | 00:01:36.988 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-03 00:01:36.989060 | orchestrator | 00:01:36.988 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-03 00:01:36.989097 | orchestrator | 00:01:36.989 STDOUT terraform:  } 2025-09-03 00:01:36.989214 | orchestrator | 00:01:36.989 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2025-09-03 00:01:36.989325 | orchestrator | 00:01:36.989 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-09-03 00:01:36.989490 | orchestrator | 00:01:36.989 STDOUT terraform:  + direction = "ingress" 2025-09-03 00:01:36.989562 | orchestrator | 00:01:36.989 STDOUT terraform:  + ethertype = "IPv4" 2025-09-03 00:01:36.989716 | orchestrator | 00:01:36.989 STDOUT terraform:  + id = (known after apply) 2025-09-03 00:01:36.989791 | orchestrator | 00:01:36.989 STDOUT terraform:  + protocol = "icmp" 2025-09-03 00:01:36.989877 | orchestrator | 00:01:36.989 STDOUT terraform:  + region = (known after apply) 2025-09-03 00:01:36.990135 | orchestrator | 00:01:36.989 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-03 00:01:36.990170 | orchestrator | 00:01:36.990 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-03 00:01:36.990230 | orchestrator | 00:01:36.990 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-03 00:01:36.990313 | orchestrator | 00:01:36.990 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-03 00:01:36.990391 | orchestrator | 00:01:36.990 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-03 00:01:36.990450 | orchestrator | 00:01:36.990 STDOUT terraform:  } 2025-09-03 00:01:36.990516 | orchestrator | 00:01:36.990 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2025-09-03 00:01:36.990687 | orchestrator | 
00:01:36.990 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2025-09-03 00:01:36.990723 | orchestrator | 00:01:36.990 STDOUT terraform:  + description = "vrrp" 2025-09-03 00:01:36.990795 | orchestrator | 00:01:36.990 STDOUT terraform:  + direction = "ingress" 2025-09-03 00:01:36.990879 | orchestrator | 00:01:36.990 STDOUT terraform:  + ethertype = "IPv4" 2025-09-03 00:01:36.994043 | orchestrator | 00:01:36.990 STDOUT terraform:  + id = (known after apply) 2025-09-03 00:01:36.994118 | orchestrator | 00:01:36.993 STDOUT terraform:  + protocol = "112" 2025-09-03 00:01:36.994189 | orchestrator | 00:01:36.994 STDOUT terraform:  + region = (known after apply) 2025-09-03 00:01:36.994259 | orchestrator | 00:01:36.994 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-03 00:01:36.994449 | orchestrator | 00:01:36.994 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-03 00:01:36.994542 | orchestrator | 00:01:36.994 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-03 00:01:36.994653 | orchestrator | 00:01:36.994 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-03 00:01:36.994771 | orchestrator | 00:01:36.994 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-03 00:01:36.994824 | orchestrator | 00:01:36.994 STDOUT terraform:  } 2025-09-03 00:01:36.994994 | orchestrator | 00:01:36.994 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created 2025-09-03 00:01:36.995282 | orchestrator | 00:01:36.995 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" { 2025-09-03 00:01:36.995288 | orchestrator | 00:01:36.995 STDOUT terraform:  + all_tags = (known after apply) 2025-09-03 00:01:36.995383 | orchestrator | 00:01:36.995 STDOUT terraform:  + description = "management security group" 2025-09-03 00:01:36.995481 | orchestrator | 00:01:36.995 STDOUT terraform:  + id = (known after apply) 2025-09-03 00:01:36.995488 | orchestrator | 00:01:36.995 STDOUT terraform:  + name = "testbed-management" 2025-09-03 00:01:36.995630 | orchestrator | 00:01:36.995 STDOUT terraform:  + region = (known after apply) 2025-09-03 00:01:36.995741 | orchestrator | 00:01:36.995 STDOUT terraform:  + stateful = (known after apply) 2025-09-03 00:01:36.995827 | orchestrator | 00:01:36.995 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-03 00:01:36.995958 | orchestrator | 00:01:36.995 STDOUT terraform:  } 2025-09-03 00:01:36.996430 | orchestrator | 00:01:36.995 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created 2025-09-03 00:01:36.996760 | orchestrator | 00:01:36.996 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" { 2025-09-03 00:01:36.997025 | orchestrator | 00:01:36.996 STDOUT terraform:  + all_tags = (known after apply) 2025-09-03 00:01:36.997166 | orchestrator | 00:01:36.996 STDOUT terraform:  + description = "node security group" 2025-09-03 00:01:36.997392 | orchestrator | 00:01:36.997 STDOUT terraform:  + id = (known after apply) 2025-09-03 00:01:36.997494 | orchestrator | 00:01:36.997 STDOUT terraform:  + name = "testbed-node" 2025-09-03 00:01:36.997563 | orchestrator | 00:01:36.997 STDOUT terraform:  + region = (known after apply) 2025-09-03 00:01:36.997821 | orchestrator | 00:01:36.997 STDOUT terraform:  + stateful = (known after apply) 2025-09-03 00:01:36.998006 | orchestrator | 00:01:36.997 STDOUT terraform:  + tenant_id = (known after 
apply) 2025-09-03 00:01:36.998153 | orchestrator | 00:01:36.998 STDOUT terraform:  } 2025-09-03 00:01:37.000837 | orchestrator | 00:01:36.998 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created 2025-09-03 00:01:37.000881 | orchestrator | 00:01:36.998 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" { 2025-09-03 00:01:37.000896 | orchestrator | 00:01:36.998 STDOUT terraform:  + all_tags = (known after apply) 2025-09-03 00:01:37.000901 | orchestrator | 00:01:36.998 STDOUT terraform:  + cidr = "192.168.16.0/20" 2025-09-03 00:01:37.000905 | orchestrator | 00:01:36.998 STDOUT terraform:  + dns_nameservers = [ 2025-09-03 00:01:37.000909 | orchestrator | 00:01:36.998 STDOUT terraform:  + "8.8.8.8", 2025-09-03 00:01:37.000913 | orchestrator | 00:01:36.998 STDOUT terraform:  + "9.9.9.9", 2025-09-03 00:01:37.000957 | orchestrator | 00:01:36.998 STDOUT terraform:  ] 2025-09-03 00:01:37.000962 | orchestrator | 00:01:36.998 STDOUT terraform:  + enable_dhcp = true 2025-09-03 00:01:37.000985 | orchestrator | 00:01:36.998 STDOUT terraform:  + gateway_ip = (known after apply) 2025-09-03 00:01:37.000990 | orchestrator | 00:01:36.998 STDOUT terraform:  + id = (known after apply) 2025-09-03 00:01:37.000994 | orchestrator | 00:01:36.998 STDOUT terraform:  + ip_version = 4 2025-09-03 00:01:37.000998 | orchestrator | 00:01:36.998 STDOUT terraform:  + ipv6_address_mode = (known after apply) 2025-09-03 00:01:37.001002 | orchestrator | 00:01:36.998 STDOUT terraform:  + ipv6_ra_mode = (known after apply) 2025-09-03 00:01:37.001006 | orchestrator | 00:01:36.998 STDOUT terraform:  + name = "subnet-testbed-management" 2025-09-03 00:01:37.001010 | orchestrator | 00:01:36.998 STDOUT terraform:  + network_id = (known after apply) 2025-09-03 00:01:37.001013 | orchestrator | 00:01:36.998 STDOUT terraform:  + no_gateway = false 2025-09-03 00:01:37.001028 | orchestrator | 00:01:36.998 STDOUT terraform:  + region = (known after apply) 2025-09-03 00:01:37.001041 | orchestrator | 00:01:36.998 STDOUT terraform:  + service_types = (known after apply) 2025-09-03 00:01:37.001046 | orchestrator | 00:01:36.998 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-03 00:01:37.001060 | orchestrator | 00:01:36.998 STDOUT terraform:  + allocation_pool { 2025-09-03 00:01:37.001064 | orchestrator | 00:01:36.998 STDOUT terraform:  + end = "192.168.31.250" 2025-09-03 00:01:37.001068 | orchestrator | 00:01:36.998 STDOUT terraform:  + start = "192.168.31.200" 2025-09-03 00:01:37.001083 | orchestrator | 00:01:36.998 STDOUT terraform:  } 2025-09-03 00:01:37.001088 | orchestrator | 00:01:36.998 STDOUT terraform:  } 2025-09-03 00:01:37.001103 | orchestrator | 00:01:36.998 STDOUT terraform:  # terraform_data.image will be created 2025-09-03 00:01:37.001107 | orchestrator | 00:01:36.999 STDOUT terraform:  + resource "terraform_data" "image" { 2025-09-03 00:01:37.001111 | orchestrator | 00:01:36.999 STDOUT terraform:  + id = (known after apply) 2025-09-03 00:01:37.001152 | orchestrator | 00:01:36.999 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-09-03 00:01:37.001158 | orchestrator | 00:01:36.999 STDOUT terraform:  + output = (known after apply) 2025-09-03 00:01:37.001162 | orchestrator | 00:01:36.999 STDOUT terraform:  } 2025-09-03 00:01:37.001219 | orchestrator | 00:01:36.999 STDOUT terraform:  # terraform_data.image_node will be created 2025-09-03 00:01:37.001227 | orchestrator | 00:01:36.999 STDOUT terraform:  + resource "terraform_data" "image_node" { 2025-09-03 
00:01:37.001232 | orchestrator | 00:01:36.999 STDOUT terraform:  + id = (known after apply) 2025-09-03 00:01:37.001235 | orchestrator | 00:01:36.999 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-09-03 00:01:37.001282 | orchestrator | 00:01:36.999 STDOUT terraform:  + output = (known after apply) 2025-09-03 00:01:37.001315 | orchestrator | 00:01:36.999 STDOUT terraform:  } 2025-09-03 00:01:37.001320 | orchestrator | 00:01:36.999 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy. 2025-09-03 00:01:37.001324 | orchestrator | 00:01:36.999 STDOUT terraform: Changes to Outputs: 2025-09-03 00:01:37.001393 | orchestrator | 00:01:36.999 STDOUT terraform:  + manager_address = (sensitive value) 2025-09-03 00:01:37.001513 | orchestrator | 00:01:36.999 STDOUT terraform:  + private_key = (sensitive value) 2025-09-03 00:01:37.074796 | orchestrator | 00:01:37.074 STDOUT terraform: terraform_data.image_node: Creating... 2025-09-03 00:01:37.204779 | orchestrator | 00:01:37.204 STDOUT terraform: terraform_data.image: Creating... 2025-09-03 00:01:37.204866 | orchestrator | 00:01:37.204 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=d048d977-b6c1-641b-4dec-91057396559e] 2025-09-03 00:01:37.204876 | orchestrator | 00:01:37.204 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=76fdf33f-364e-50e2-7dda-0cc222b801dd] 2025-09-03 00:01:37.234148 | orchestrator | 00:01:37.233 STDOUT terraform: data.openstack_images_image_v2.image: Reading... 2025-09-03 00:01:37.242738 | orchestrator | 00:01:37.242 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading... 2025-09-03 00:01:37.243271 | orchestrator | 00:01:37.243 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2025-09-03 00:01:37.244374 | orchestrator | 00:01:37.244 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2025-09-03 00:01:37.244854 | orchestrator | 00:01:37.244 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2025-09-03 00:01:37.252046 | orchestrator | 00:01:37.251 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2025-09-03 00:01:37.252351 | orchestrator | 00:01:37.252 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2025-09-03 00:01:37.252496 | orchestrator | 00:01:37.252 STDOUT terraform: openstack_compute_keypair_v2.key: Creating... 2025-09-03 00:01:37.262070 | orchestrator | 00:01:37.261 STDOUT terraform: openstack_networking_network_v2.net_management: Creating... 2025-09-03 00:01:37.263460 | orchestrator | 00:01:37.263 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2025-09-03 00:01:37.689727 | orchestrator | 00:01:37.689 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2025-09-03 00:01:37.695934 | orchestrator | 00:01:37.695 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2025-09-03 00:01:37.697113 | orchestrator | 00:01:37.697 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2025-09-03 00:01:37.707849 | orchestrator | 00:01:37.707 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating... 
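Editor's note: the plan entries above for openstack_networking_port_v2.node_port_management[*] show each node port carrying a fixed management address (the .14/.15 addresses are visible in the plan) plus allowed_address_pairs for 192.168.112.0/20, 192.168.16.254/20, 192.168.16.8/20 and 192.168.16.9/20. The actual osism/testbed Terraform sources are not part of this log, so the following is only a minimal HCL sketch; the security group attachment and the IP formula are assumptions.

resource "openstack_networking_port_v2" "node_port_management" {
  count      = 6                                                     # indices [0]..[5] appear in the apply output
  network_id = openstack_networking_network_v2.net_management.id
  security_group_ids = [
    openstack_networking_secgroup_v2.security_group_node.id,         # assumed attachment; the plan only shows "(known after apply)"
  ]

  fixed_ip {
    subnet_id  = openstack_networking_subnet_v2.subnet_management.id
    ip_address = "192.168.16.${10 + count.index}"                    # inferred pattern; only .14 and .15 are visible above
  }

  # Additional prefixes the port may answer for (VIPs / routed ranges), verbatim from the plan:
  allowed_address_pairs { ip_address = "192.168.112.0/20" }
  allowed_address_pairs { ip_address = "192.168.16.254/20" }
  allowed_address_pairs { ip_address = "192.168.16.8/20" }
  allowed_address_pairs { ip_address = "192.168.16.9/20" }
}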
2025-09-03 00:01:37.760903 | orchestrator | 00:01:37.760 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed] 2025-09-03 00:01:37.765909 | orchestrator | 00:01:37.765 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2025-09-03 00:01:38.262919 | orchestrator | 00:01:38.262 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 1s [id=b55adba2-2e36-4f68-9037-31de49a790cd] 2025-09-03 00:01:38.268932 | orchestrator | 00:01:38.267 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2025-09-03 00:01:40.894917 | orchestrator | 00:01:40.892 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=7512b390-1fa3-4840-9943-7c6482fdb145] 2025-09-03 00:01:40.901606 | orchestrator | 00:01:40.900 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2025-09-03 00:01:40.914977 | orchestrator | 00:01:40.914 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=2aa4af3c-ac98-453f-b557-6d0c203c4201] 2025-09-03 00:01:40.919935 | orchestrator | 00:01:40.919 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=ce19fbd3-6a41-4577-8f91-9183654abf8c] 2025-09-03 00:01:40.922215 | orchestrator | 00:01:40.922 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2025-09-03 00:01:40.933070 | orchestrator | 00:01:40.932 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2025-09-03 00:01:40.934254 | orchestrator | 00:01:40.934 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=d4852aea-51af-4111-8e77-3990a105da37] 2025-09-03 00:01:40.940071 | orchestrator | 00:01:40.939 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=f4ffaa61-7d7a-4b4d-ae66-bf9c1470deb3] 2025-09-03 00:01:40.940158 | orchestrator | 00:01:40.940 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=409307c9-8e7f-483b-a404-5462fce46233] 2025-09-03 00:01:40.942371 | orchestrator | 00:01:40.942 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2025-09-03 00:01:40.944829 | orchestrator | 00:01:40.944 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2025-09-03 00:01:40.953286 | orchestrator | 00:01:40.953 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2025-09-03 00:01:40.991584 | orchestrator | 00:01:40.991 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 3s [id=e885087e-46ab-46e4-825b-bdcddcbfdff8] 2025-09-03 00:01:41.001645 | orchestrator | 00:01:41.001 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=89937d38-622a-4519-a70d-71f9b6cc380e] 2025-09-03 00:01:41.008620 | orchestrator | 00:01:41.007 STDOUT terraform: local_sensitive_file.id_rsa: Creating... 2025-09-03 00:01:41.015641 | orchestrator | 00:01:41.015 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=ad6690c5d7d15a0199ef166b145a9b88af1c9c0d] 2025-09-03 00:01:41.017410 | orchestrator | 00:01:41.017 STDOUT terraform: local_file.id_rsa_pub: Creating... 
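Editor's note: net_management, subnet_management and the router are created next. A sketch of those resources, reconstructed from the plan attributes shown earlier (CIDR, DNS servers, allocation pool, external network ID, availability zone hint); the network name is an assumption, since only the resource address appears in the log.

resource "openstack_networking_network_v2" "net_management" {
  name = "net-testbed-management"                                    # assumed name
}

resource "openstack_networking_subnet_v2" "subnet_management" {
  name            = "subnet-testbed-management"
  network_id      = openstack_networking_network_v2.net_management.id
  cidr            = "192.168.16.0/20"
  ip_version      = 4
  enable_dhcp     = true
  dns_nameservers = ["8.8.8.8", "9.9.9.9"]

  allocation_pool {
    start = "192.168.31.200"
    end   = "192.168.31.250"
  }
}

resource "openstack_networking_router_v2" "router" {
  name                    = "testbed"
  external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"   # external network of the CI cloud, as printed in the plan
  availability_zone_hints = ["nova"]
}

resource "openstack_networking_router_interface_v2" "router_interface" {
  router_id = openstack_networking_router_v2.router.id
  subnet_id = openstack_networking_subnet_v2.subnet_management.id
}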
2025-09-03 00:01:41.018350 | orchestrator | 00:01:41.018 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 3s [id=9ba28649-84e7-4d30-a12b-e93c6e95fbcd] 2025-09-03 00:01:41.022349 | orchestrator | 00:01:41.022 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating... 2025-09-03 00:01:41.023880 | orchestrator | 00:01:41.023 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=70894e6ecb3920da3d3cd6a734a57b619429c743] 2025-09-03 00:01:41.620483 | orchestrator | 00:01:41.620 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=8a8d7701-fa3c-42c5-9179-39e748f0f96d] 2025-09-03 00:01:42.036630 | orchestrator | 00:01:42.036 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=24d98006-2660-4468-8165-9b68e8a5ae58] 2025-09-03 00:01:42.048305 | orchestrator | 00:01:42.047 STDOUT terraform: openstack_networking_router_v2.router: Creating... 2025-09-03 00:01:44.307244 | orchestrator | 00:01:44.306 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=5cc3399e-7952-4fd6-9ff6-a2b0255266c3] 2025-09-03 00:01:44.692110 | orchestrator | 00:01:44.346 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=e8670175-54bd-41b5-bd3c-dd9ea44e7b4a] 2025-09-03 00:01:44.692211 | orchestrator | 00:01:44.374 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=b569a853-28fc-48aa-b8cb-a5321e1a853d] 2025-09-03 00:01:44.692224 | orchestrator | 00:01:44.398 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=c9bc7981-e388-467f-b59a-2076c31d0343] 2025-09-03 00:01:44.692235 | orchestrator | 00:01:44.403 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=c87564b4-441b-42f3-97de-587e6061c3ae] 2025-09-03 00:01:44.692246 | orchestrator | 00:01:44.416 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=fc54dbc2-85fa-4a7d-8bd9-52ff930caf77] 2025-09-03 00:01:45.147206 | orchestrator | 00:01:45.146 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 3s [id=d0b2890a-f37d-4e93-a664-8f70f2d05abc] 2025-09-03 00:01:45.154531 | orchestrator | 00:01:45.154 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating... 2025-09-03 00:01:45.154621 | orchestrator | 00:01:45.154 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating... 2025-09-03 00:01:45.155010 | orchestrator | 00:01:45.154 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating... 2025-09-03 00:01:45.346173 | orchestrator | 00:01:45.345 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=aaa87136-de87-441e-ba2e-2846e942eef3] 2025-09-03 00:01:45.356378 | orchestrator | 00:01:45.356 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2025-09-03 00:01:45.356539 | orchestrator | 00:01:45.356 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2025-09-03 00:01:45.356814 | orchestrator | 00:01:45.356 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 
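Editor's note: two security groups (testbed-management, testbed-node) and their rules are created in this phase. The plan above spells out the rule parameters: SSH (22/tcp) and WireGuard (51820/udp) open to 0.0.0.0/0 on the management group, intra-network tcp/udp limited to 192.168.16.0/20, ICMP, and a VRRP rule using IP protocol 112. A sketch of the groups and a representative subset of rules; which group the VRRP rule attaches to is an assumption.

resource "openstack_networking_secgroup_v2" "security_group_management" {
  name        = "testbed-management"
  description = "management security group"
}

resource "openstack_networking_secgroup_v2" "security_group_node" {
  name        = "testbed-node"
  description = "node security group"
}

resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
  description       = "ssh"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 22
  port_range_max    = 22
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_management.id
}

resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
  description       = "wireguard"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "udp"
  port_range_min    = 51820
  port_range_max    = 51820
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_management.id
}

resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
  description       = "vrrp"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "112"                                          # VRRP is IP protocol 112, no port range
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_node.id   # assumed attachment
}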
2025-09-03 00:01:45.359491 | orchestrator | 00:01:45.359 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2025-09-03 00:01:45.364060 | orchestrator | 00:01:45.363 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2025-09-03 00:01:45.364393 | orchestrator | 00:01:45.364 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating... 2025-09-03 00:01:45.384996 | orchestrator | 00:01:45.384 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=9d2eaf04-9e30-4631-9b0e-b5ec3a4f8992] 2025-09-03 00:01:45.389460 | orchestrator | 00:01:45.389 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2025-09-03 00:01:45.395352 | orchestrator | 00:01:45.395 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating... 2025-09-03 00:01:45.397095 | orchestrator | 00:01:45.396 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating... 2025-09-03 00:01:45.518162 | orchestrator | 00:01:45.517 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=a1786420-4ccc-4c6c-a3ab-dccd515dd360] 2025-09-03 00:01:45.527887 | orchestrator | 00:01:45.527 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating... 2025-09-03 00:01:45.596883 | orchestrator | 00:01:45.596 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=cdf068cf-9eea-416b-9202-cb302b97350d] 2025-09-03 00:01:45.610155 | orchestrator | 00:01:45.609 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating... 2025-09-03 00:01:46.023562 | orchestrator | 00:01:46.023 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=fe3bf9e8-8907-4b44-b5ec-63905344372a] 2025-09-03 00:01:46.036288 | orchestrator | 00:01:46.036 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating... 2025-09-03 00:01:46.047193 | orchestrator | 00:01:46.047 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=4510c842-e136-4424-8168-4a02ef44a7a7] 2025-09-03 00:01:46.058136 | orchestrator | 00:01:46.057 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating... 2025-09-03 00:01:46.109299 | orchestrator | 00:01:46.109 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=b1f0eb82-cc66-4cc9-9322-0dbffce72a65] 2025-09-03 00:01:46.114350 | orchestrator | 00:01:46.114 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2025-09-03 00:01:46.229593 | orchestrator | 00:01:46.229 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 0s [id=b1a56d0d-6359-4d6b-9147-e670893768a8] 2025-09-03 00:01:46.236833 | orchestrator | 00:01:46.236 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2025-09-03 00:01:46.307218 | orchestrator | 00:01:46.306 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=de360596-adbe-483f-8506-e9beb7f33883] 2025-09-03 00:01:46.313783 | orchestrator | 00:01:46.313 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 
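Editor's note: earlier in the plan, terraform_data.image and terraform_data.image_node both carry the literal image name "Ubuntu 24.04", and immediately after apply starts the two data.openstack_images_image_v2 lookups resolve to the same Glance image ID (846820b2-...). One plausible wiring of that indirection is sketched below; the data source filter arguments are not visible in this log and are assumptions.

resource "terraform_data" "image" {
  input = "Ubuntu 24.04"                     # image display name, as shown in the plan
}

data "openstack_images_image_v2" "image" {
  name        = terraform_data.image.output  # resolves the display name to a Glance image ID
  most_recent = true                         # assumed filter; the real configuration may differ
}

# Server definitions can then consume data.openstack_images_image_v2.image.id;
# how exactly the testbed wires this into the instances is not shown in this section.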
2025-09-03 00:01:46.331559 | orchestrator | 00:01:46.331 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=013f90f5-6a30-492e-a4f4-e68e3eaecbca] 2025-09-03 00:01:46.487668 | orchestrator | 00:01:46.487 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=954451d9-4da3-4f7e-a33a-dbb738210d57] 2025-09-03 00:01:46.501311 | orchestrator | 00:01:46.501 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 2s [id=78887093-bc6b-4b32-afc1-43d1b9e9d7e3] 2025-09-03 00:01:46.555924 | orchestrator | 00:01:46.555 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 2s [id=d34bf0cd-4c03-4a54-9965-def67cf287bf] 2025-09-03 00:01:46.702006 | orchestrator | 00:01:46.701 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=f1699e53-0f31-4c69-9849-52f86ddce7e7] 2025-09-03 00:01:46.708225 | orchestrator | 00:01:46.707 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=05a0a2bc-6765-47cf-a3b9-152c6b5ef52c] 2025-09-03 00:01:46.718004 | orchestrator | 00:01:46.717 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 2s [id=bd1742fc-7fdd-4fa5-bfc6-3788f2db1b70] 2025-09-03 00:01:46.763843 | orchestrator | 00:01:46.763 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=b2159b41-0941-4c03-9a78-b2a2a2937b89] 2025-09-03 00:01:47.318770 | orchestrator | 00:01:47.318 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=c24bc52a-c65e-449e-910d-4d4d426acfcd] 2025-09-03 00:01:48.294743 | orchestrator | 00:01:48.294 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 3s [id=8c2e2a73-6dc7-4255-ba15-76644b0ba951] 2025-09-03 00:01:48.323239 | orchestrator | 00:01:48.323 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating... 2025-09-03 00:01:48.327259 | orchestrator | 00:01:48.327 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2025-09-03 00:01:48.335529 | orchestrator | 00:01:48.335 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating... 2025-09-03 00:01:48.337253 | orchestrator | 00:01:48.337 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating... 2025-09-03 00:01:48.345070 | orchestrator | 00:01:48.344 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating... 2025-09-03 00:01:48.355280 | orchestrator | 00:01:48.355 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating... 2025-09-03 00:01:48.355329 | orchestrator | 00:01:48.355 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating... 2025-09-03 00:01:49.834000 | orchestrator | 00:01:49.833 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=ecdc25f0-ecb8-4ee7-89a0-d3fe32c443a6] 2025-09-03 00:01:49.846847 | orchestrator | 00:01:49.846 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2025-09-03 00:01:49.862685 | orchestrator | 00:01:49.862 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating... 
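Editor's note: the apply then allocates manager_floating_ip, associates it with the manager's management port, and writes the address into local_file.MANAGER_ADDRESS (only content hashes appear in the log). A sketch under stated assumptions: the pool name and the output file path are not visible here and are placeholders.

resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
  pool = "public"                                                    # assumed pool name for the external network e6be7364-...
}

resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
  floating_ip = openstack_networking_floatingip_v2.manager_floating_ip.address
  port_id     = openstack_networking_port_v2.manager_port_management.id
}

resource "local_file" "MANAGER_ADDRESS" {
  filename = "${path.module}/.MANAGER_ADDRESS"                       # assumed path; the later "Fetch manager address" task reads it back
  content  = openstack_networking_floatingip_v2.manager_floating_ip.address
}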
2025-09-03 00:01:49.862864 | orchestrator | 00:01:49.862 STDOUT terraform: local_file.inventory: Creating... 2025-09-03 00:01:49.869557 | orchestrator | 00:01:49.869 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=abdb09fe1334c132bb2d2c60f134507217fbe41a] 2025-09-03 00:01:49.870762 | orchestrator | 00:01:49.870 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=355ccca735e4c593458b52eb7737d90c842eb3aa] 2025-09-03 00:01:50.645192 | orchestrator | 00:01:50.644 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=ecdc25f0-ecb8-4ee7-89a0-d3fe32c443a6] 2025-09-03 00:01:58.325505 | orchestrator | 00:01:58.325 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2025-09-03 00:01:58.340690 | orchestrator | 00:01:58.340 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2025-09-03 00:01:58.340799 | orchestrator | 00:01:58.340 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2025-09-03 00:01:58.353923 | orchestrator | 00:01:58.353 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2025-09-03 00:01:58.355170 | orchestrator | 00:01:58.354 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2025-09-03 00:01:58.355366 | orchestrator | 00:01:58.355 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2025-09-03 00:02:08.328096 | orchestrator | 00:02:08.327 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2025-09-03 00:02:08.341315 | orchestrator | 00:02:08.341 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2025-09-03 00:02:08.341363 | orchestrator | 00:02:08.341 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2025-09-03 00:02:08.354616 | orchestrator | 00:02:08.354 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2025-09-03 00:02:08.355799 | orchestrator | 00:02:08.355 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2025-09-03 00:02:08.355876 | orchestrator | 00:02:08.355 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2025-09-03 00:02:08.811398 | orchestrator | 00:02:08.810 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 21s [id=360b77b1-b4bd-4e9f-bacd-5164d93d1769] 2025-09-03 00:02:08.838310 | orchestrator | 00:02:08.837 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 21s [id=eb8600c9-2526-440f-834e-9263de67a126] 2025-09-03 00:02:08.883082 | orchestrator | 00:02:08.882 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 21s [id=fe569d12-ad8e-4ada-8478-9d34e31bb8af] 2025-09-03 00:02:18.331936 | orchestrator | 00:02:18.331 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed] 2025-09-03 00:02:18.342184 | orchestrator | 00:02:18.341 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed] 2025-09-03 00:02:18.356287 | orchestrator | 00:02:18.356 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... 
[30s elapsed] 2025-09-03 00:02:18.895942 | orchestrator | 00:02:18.895 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 31s [id=7e94d2f1-e4f3-49de-96ec-a0c93b949c88] 2025-09-03 00:02:18.996933 | orchestrator | 00:02:18.996 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 31s [id=b740e54f-3a54-40ee-a06d-2799eeb4b5a3] 2025-09-03 00:02:19.043727 | orchestrator | 00:02:19.043 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 31s [id=dc752b00-8903-41be-9ed1-87e3f2a007f1] 2025-09-03 00:02:19.059105 | orchestrator | 00:02:19.058 STDOUT terraform: null_resource.node_semaphore: Creating... 2025-09-03 00:02:19.070484 | orchestrator | 00:02:19.070 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=6490780050283191428] 2025-09-03 00:02:19.084929 | orchestrator | 00:02:19.084 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2025-09-03 00:02:19.090242 | orchestrator | 00:02:19.090 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2025-09-03 00:02:19.090651 | orchestrator | 00:02:19.090 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2025-09-03 00:02:19.092451 | orchestrator | 00:02:19.092 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2025-09-03 00:02:19.094244 | orchestrator | 00:02:19.094 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating... 2025-09-03 00:02:19.103526 | orchestrator | 00:02:19.103 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2025-09-03 00:02:19.103586 | orchestrator | 00:02:19.103 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2025-09-03 00:02:19.108168 | orchestrator | 00:02:19.108 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2025-09-03 00:02:19.116260 | orchestrator | 00:02:19.115 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2025-09-03 00:02:19.116880 | orchestrator | 00:02:19.116 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 
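Editor's note: once the node servers exist, openstack_compute_volume_attach_v2.node_volume_attachment[*] binds the nine pre-created Cinder volumes to them; the resulting IDs below have the form <server_id>/<volume_id>, and from those IDs the pattern is three extra volumes per node, volume i attaching to node_server[3 + (i mod 3)]. A sketch reproducing that pattern; volume naming and size are assumptions, as they are not printed in this section.

resource "openstack_blockstorage_volume_v3" "node_volume" {
  count = 9                                                          # indices [0]..[8] appear in the apply output
  name  = "testbed-node-volume-${count.index}"                       # assumed naming
  size  = 20                                                         # assumed size
}

resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = 9
  # Reproduces the mapping visible in the attachment IDs below: volumes 0/3/6 on node 3, 1/4/7 on node 4, 2/5/8 on node 5.
  instance_id = openstack_compute_instance_v2.node_server[3 + count.index % 3].id
  volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id
}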
2025-09-03 00:02:22.500041 | orchestrator | 00:02:22.499 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 3s [id=dc752b00-8903-41be-9ed1-87e3f2a007f1/d4852aea-51af-4111-8e77-3990a105da37] 2025-09-03 00:02:22.521044 | orchestrator | 00:02:22.520 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 4s [id=b740e54f-3a54-40ee-a06d-2799eeb4b5a3/2aa4af3c-ac98-453f-b557-6d0c203c4201] 2025-09-03 00:02:22.528454 | orchestrator | 00:02:22.527 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 4s [id=dc752b00-8903-41be-9ed1-87e3f2a007f1/ce19fbd3-6a41-4577-8f91-9183654abf8c] 2025-09-03 00:02:22.552298 | orchestrator | 00:02:22.552 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 4s [id=fe569d12-ad8e-4ada-8478-9d34e31bb8af/e885087e-46ab-46e4-825b-bdcddcbfdff8] 2025-09-03 00:02:22.560914 | orchestrator | 00:02:22.560 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 4s [id=b740e54f-3a54-40ee-a06d-2799eeb4b5a3/89937d38-622a-4519-a70d-71f9b6cc380e] 2025-09-03 00:02:22.578917 | orchestrator | 00:02:22.578 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 4s [id=fe569d12-ad8e-4ada-8478-9d34e31bb8af/7512b390-1fa3-4840-9943-7c6482fdb145] 2025-09-03 00:02:22.601231 | orchestrator | 00:02:22.600 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 4s [id=b740e54f-3a54-40ee-a06d-2799eeb4b5a3/f4ffaa61-7d7a-4b4d-ae66-bf9c1470deb3] 2025-09-03 00:02:22.615520 | orchestrator | 00:02:22.615 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 4s [id=fe569d12-ad8e-4ada-8478-9d34e31bb8af/9ba28649-84e7-4d30-a12b-e93c6e95fbcd] 2025-09-03 00:02:28.627881 | orchestrator | 00:02:28.627 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 10s [id=dc752b00-8903-41be-9ed1-87e3f2a007f1/409307c9-8e7f-483b-a404-5462fce46233] 2025-09-03 00:02:29.100390 | orchestrator | 00:02:29.100 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2025-09-03 00:02:39.100920 | orchestrator | 00:02:39.100 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2025-09-03 00:02:39.859882 | orchestrator | 00:02:39.859 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=2d18b20e-f576-450b-b9d3-c1050a4a2e69] 2025-09-03 00:02:39.886092 | orchestrator | 00:02:39.885 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 
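Editor's note: the run finishes with "Apply complete! Resources: 64 added" and two root outputs that the plan marked as sensitive, which is why the Outputs section below prints manager_address and private_key without values. A sketch of how such outputs are typically declared; the exact value expressions are assumptions.

output "manager_address" {
  value     = openstack_networking_floatingip_v2.manager_floating_ip.address   # assumed source
  sensitive = true                                                             # keeps the value out of the console log
}

output "private_key" {
  value     = openstack_compute_keypair_v2.key.private_key                     # assumed source
  sensitive = true
}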
2025-09-03 00:02:39.886186 | orchestrator | 00:02:39.885 STDOUT terraform: Outputs: 2025-09-03 00:02:39.886216 | orchestrator | 00:02:39.885 STDOUT terraform: manager_address = 2025-09-03 00:02:39.886229 | orchestrator | 00:02:39.886 STDOUT terraform: private_key = 2025-09-03 00:02:40.237317 | orchestrator | ok: Runtime: 0:01:09.499219 2025-09-03 00:02:40.299346 | 2025-09-03 00:02:40.299895 | TASK [Create infrastructure (stable)] 2025-09-03 00:02:40.841093 | orchestrator | skipping: Conditional result was False 2025-09-03 00:02:40.859538 | 2025-09-03 00:02:40.859704 | TASK [Fetch manager address] 2025-09-03 00:02:41.283549 | orchestrator | ok 2025-09-03 00:02:41.294329 | 2025-09-03 00:02:41.294451 | TASK [Set manager_host address] 2025-09-03 00:02:41.396491 | orchestrator | ok 2025-09-03 00:02:41.406089 | 2025-09-03 00:02:41.406220 | LOOP [Update ansible collections] 2025-09-03 00:02:42.249049 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-09-03 00:02:42.249392 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-03 00:02:42.249448 | orchestrator | Starting galaxy collection install process 2025-09-03 00:02:42.249483 | orchestrator | Process install dependency map 2025-09-03 00:02:42.249515 | orchestrator | Starting collection install process 2025-09-03 00:02:42.249543 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons' 2025-09-03 00:02:42.249574 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons 2025-09-03 00:02:42.249609 | orchestrator | osism.commons:999.0.0 was installed successfully 2025-09-03 00:02:42.249711 | orchestrator | ok: Item: commons Runtime: 0:00:00.515991 2025-09-03 00:02:43.088416 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-09-03 00:02:43.089099 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-03 00:02:43.089191 | orchestrator | Starting galaxy collection install process 2025-09-03 00:02:43.089871 | orchestrator | Process install dependency map 2025-09-03 00:02:43.090189 | orchestrator | Starting collection install process 2025-09-03 00:02:43.090455 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed06/.ansible/collections/ansible_collections/osism/services' 2025-09-03 00:02:43.090516 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/services 2025-09-03 00:02:43.091062 | orchestrator | osism.services:999.0.0 was installed successfully 2025-09-03 00:02:43.091410 | orchestrator | ok: Item: services Runtime: 0:00:00.580509 2025-09-03 00:02:43.111410 | 2025-09-03 00:02:43.111517 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-09-03 00:02:53.654998 | orchestrator | ok 2025-09-03 00:02:53.667690 | 2025-09-03 00:02:53.667823 | TASK [Wait a little longer for the manager so that everything is ready] 2025-09-03 00:03:53.721872 | orchestrator | ok 2025-09-03 00:03:53.736354 | 2025-09-03 00:03:53.736513 | TASK [Fetch manager ssh hostkey] 2025-09-03 00:03:55.310926 | orchestrator | Output suppressed because no_log was given 2025-09-03 00:03:55.326852 | 2025-09-03 00:03:55.327206 | TASK [Get ssh keypair from terraform environment] 2025-09-03 00:03:55.865217 | orchestrator 
| ok: Runtime: 0:00:00.011091 2025-09-03 00:03:55.882349 | 2025-09-03 00:03:55.882531 | TASK [Point out that the following task takes some time and does not give any output] 2025-09-03 00:03:55.929847 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-09-03 00:03:55.939871 | 2025-09-03 00:03:55.940008 | TASK [Run manager part 0] 2025-09-03 00:03:56.762857 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-03 00:03:56.806404 | orchestrator | 2025-09-03 00:03:56.806448 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-09-03 00:03:56.806456 | orchestrator | 2025-09-03 00:03:56.806469 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-09-03 00:03:58.444382 | orchestrator | ok: [testbed-manager] 2025-09-03 00:03:58.444494 | orchestrator | 2025-09-03 00:03:58.444548 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-09-03 00:03:58.444572 | orchestrator | 2025-09-03 00:03:58.444593 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-03 00:04:00.278270 | orchestrator | ok: [testbed-manager] 2025-09-03 00:04:00.278415 | orchestrator | 2025-09-03 00:04:00.278429 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-09-03 00:04:00.915217 | orchestrator | ok: [testbed-manager] 2025-09-03 00:04:00.915307 | orchestrator | 2025-09-03 00:04:00.915325 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-09-03 00:04:00.965166 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:04:00.965221 | orchestrator | 2025-09-03 00:04:00.965230 | orchestrator | TASK [Update package cache] **************************************************** 2025-09-03 00:04:00.993416 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:04:00.993468 | orchestrator | 2025-09-03 00:04:00.993475 | orchestrator | TASK [Install required packages] *********************************************** 2025-09-03 00:04:01.020033 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:04:01.020077 | orchestrator | 2025-09-03 00:04:01.020083 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-09-03 00:04:01.045060 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:04:01.045099 | orchestrator | 2025-09-03 00:04:01.045105 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-09-03 00:04:01.070504 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:04:01.070541 | orchestrator | 2025-09-03 00:04:01.070548 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-09-03 00:04:01.103382 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:04:01.103432 | orchestrator | 2025-09-03 00:04:01.103440 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-09-03 00:04:01.135280 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:04:01.135317 | orchestrator | 2025-09-03 00:04:01.135324 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-09-03 00:04:01.905777 | orchestrator | changed: 
[testbed-manager] 2025-09-03 00:04:01.905876 | orchestrator | 2025-09-03 00:04:01.905892 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-09-03 00:06:24.433060 | orchestrator | changed: [testbed-manager] 2025-09-03 00:06:24.433151 | orchestrator | 2025-09-03 00:06:24.433166 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-09-03 00:07:39.299389 | orchestrator | changed: [testbed-manager] 2025-09-03 00:07:39.299494 | orchestrator | 2025-09-03 00:07:39.299511 | orchestrator | TASK [Install required packages] *********************************************** 2025-09-03 00:07:59.332772 | orchestrator | changed: [testbed-manager] 2025-09-03 00:07:59.332807 | orchestrator | 2025-09-03 00:07:59.332817 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-09-03 00:08:07.807120 | orchestrator | changed: [testbed-manager] 2025-09-03 00:08:07.807193 | orchestrator | 2025-09-03 00:08:07.807208 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-09-03 00:08:07.855574 | orchestrator | ok: [testbed-manager] 2025-09-03 00:08:07.855612 | orchestrator | 2025-09-03 00:08:07.855625 | orchestrator | TASK [Get current user] ******************************************************** 2025-09-03 00:08:08.623014 | orchestrator | ok: [testbed-manager] 2025-09-03 00:08:08.623095 | orchestrator | 2025-09-03 00:08:08.623113 | orchestrator | TASK [Create venv directory] *************************************************** 2025-09-03 00:08:09.363271 | orchestrator | changed: [testbed-manager] 2025-09-03 00:08:09.363375 | orchestrator | 2025-09-03 00:08:09.363392 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-09-03 00:08:15.532212 | orchestrator | changed: [testbed-manager] 2025-09-03 00:08:15.532266 | orchestrator | 2025-09-03 00:08:15.532287 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-09-03 00:08:21.294810 | orchestrator | changed: [testbed-manager] 2025-09-03 00:08:21.294850 | orchestrator | 2025-09-03 00:08:21.294859 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-09-03 00:08:23.971813 | orchestrator | changed: [testbed-manager] 2025-09-03 00:08:23.971863 | orchestrator | 2025-09-03 00:08:23.971872 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-09-03 00:08:25.660141 | orchestrator | changed: [testbed-manager] 2025-09-03 00:08:25.660183 | orchestrator | 2025-09-03 00:08:25.660192 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-09-03 00:08:26.751881 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-09-03 00:08:26.751999 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-09-03 00:08:26.752015 | orchestrator | 2025-09-03 00:08:26.752028 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-09-03 00:08:26.795324 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-09-03 00:08:26.795380 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-09-03 00:08:26.795386 | orchestrator | 2.19. 
Deprecation warnings can be disabled by setting 2025-09-03 00:08:26.795391 | orchestrator | deprecation_warnings=False in ansible.cfg. 2025-09-03 00:08:30.017600 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-09-03 00:08:30.017690 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-09-03 00:08:30.017704 | orchestrator | 2025-09-03 00:08:30.017716 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-09-03 00:08:30.550990 | orchestrator | changed: [testbed-manager] 2025-09-03 00:08:30.551075 | orchestrator | 2025-09-03 00:08:30.551092 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-09-03 00:09:50.923795 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-09-03 00:09:50.925220 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-09-03 00:09:50.925237 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-09-03 00:09:50.925247 | orchestrator | 2025-09-03 00:09:50.925258 | orchestrator | TASK [Install local collections] *********************************************** 2025-09-03 00:09:53.187332 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2025-09-03 00:09:53.187423 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-09-03 00:09:53.187438 | orchestrator | 2025-09-03 00:09:53.187451 | orchestrator | PLAY [Create operator user] **************************************************** 2025-09-03 00:09:53.187464 | orchestrator | 2025-09-03 00:09:53.187475 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-03 00:09:54.569528 | orchestrator | ok: [testbed-manager] 2025-09-03 00:09:54.569619 | orchestrator | 2025-09-03 00:09:54.569639 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-09-03 00:09:54.617051 | orchestrator | ok: [testbed-manager] 2025-09-03 00:09:54.617115 | orchestrator | 2025-09-03 00:09:54.617129 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-09-03 00:09:54.680124 | orchestrator | ok: [testbed-manager] 2025-09-03 00:09:54.680187 | orchestrator | 2025-09-03 00:09:54.680202 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-09-03 00:09:55.419461 | orchestrator | changed: [testbed-manager] 2025-09-03 00:09:55.419551 | orchestrator | 2025-09-03 00:09:55.419568 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-09-03 00:09:56.135203 | orchestrator | changed: [testbed-manager] 2025-09-03 00:09:56.135333 | orchestrator | 2025-09-03 00:09:56.135351 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-09-03 00:09:57.502505 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-09-03 00:09:57.502598 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-09-03 00:09:57.502614 | orchestrator | 2025-09-03 00:09:57.502643 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-09-03 00:09:58.892905 | orchestrator | changed: [testbed-manager] 2025-09-03 00:09:58.893026 | orchestrator | 2025-09-03 00:09:58.893043 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc 
configuration file] *** 2025-09-03 00:10:00.622598 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-09-03 00:10:00.622698 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-09-03 00:10:00.622712 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-09-03 00:10:00.622724 | orchestrator | 2025-09-03 00:10:00.622737 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2025-09-03 00:10:00.683771 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:10:00.683850 | orchestrator | 2025-09-03 00:10:00.683866 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-09-03 00:10:01.233668 | orchestrator | changed: [testbed-manager] 2025-09-03 00:10:01.233773 | orchestrator | 2025-09-03 00:10:01.233794 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-09-03 00:10:01.303948 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:10:01.304005 | orchestrator | 2025-09-03 00:10:01.304012 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-09-03 00:10:02.129954 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-03 00:10:02.130118 | orchestrator | changed: [testbed-manager] 2025-09-03 00:10:02.130136 | orchestrator | 2025-09-03 00:10:02.130149 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-09-03 00:10:02.171126 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:10:02.171186 | orchestrator | 2025-09-03 00:10:02.171196 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-09-03 00:10:02.211857 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:10:02.211920 | orchestrator | 2025-09-03 00:10:02.211937 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-09-03 00:10:02.251148 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:10:02.251212 | orchestrator | 2025-09-03 00:10:02.251226 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-09-03 00:10:02.299399 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:10:02.299472 | orchestrator | 2025-09-03 00:10:02.299492 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-09-03 00:10:02.975203 | orchestrator | ok: [testbed-manager] 2025-09-03 00:10:02.975322 | orchestrator | 2025-09-03 00:10:02.975340 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-09-03 00:10:02.975353 | orchestrator | 2025-09-03 00:10:02.975365 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-03 00:10:04.305040 | orchestrator | ok: [testbed-manager] 2025-09-03 00:10:04.305122 | orchestrator | 2025-09-03 00:10:04.305137 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-09-03 00:10:05.239919 | orchestrator | changed: [testbed-manager] 2025-09-03 00:10:05.240007 | orchestrator | 2025-09-03 00:10:05.240024 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-03 00:10:05.240037 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2025-09-03 
00:10:05.240048 | orchestrator | 2025-09-03 00:10:05.692355 | orchestrator | ok: Runtime: 0:06:09.085378 2025-09-03 00:10:05.711753 | 2025-09-03 00:10:05.711918 | TASK [Point out that the log in on the manager is now possible] 2025-09-03 00:10:05.760646 | orchestrator | ok: It is now possible to log in to the manager with 'make login'. 2025-09-03 00:10:05.770777 | 2025-09-03 00:10:05.770952 | TASK [Point out that the following task takes some time and does not give any output] 2025-09-03 00:10:05.817202 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-09-03 00:10:05.827290 | 2025-09-03 00:10:05.827444 | TASK [Run manager part 1 + 2] 2025-09-03 00:10:06.571272 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-03 00:10:06.615028 | orchestrator | 2025-09-03 00:10:06.615067 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-09-03 00:10:06.615074 | orchestrator | 2025-09-03 00:10:06.615085 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-03 00:10:09.424395 | orchestrator | ok: [testbed-manager] 2025-09-03 00:10:09.424442 | orchestrator | 2025-09-03 00:10:09.424464 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-09-03 00:10:09.464942 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:10:09.464985 | orchestrator | 2025-09-03 00:10:09.464997 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-09-03 00:10:09.498442 | orchestrator | ok: [testbed-manager] 2025-09-03 00:10:09.498470 | orchestrator | 2025-09-03 00:10:09.498478 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-09-03 00:10:09.535064 | orchestrator | ok: [testbed-manager] 2025-09-03 00:10:09.535097 | orchestrator | 2025-09-03 00:10:09.535107 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-09-03 00:10:09.592471 | orchestrator | ok: [testbed-manager] 2025-09-03 00:10:09.592502 | orchestrator | 2025-09-03 00:10:09.592511 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-09-03 00:10:09.644950 | orchestrator | ok: [testbed-manager] 2025-09-03 00:10:09.644980 | orchestrator | 2025-09-03 00:10:09.644989 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-09-03 00:10:09.697955 | orchestrator | included: /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-09-03 00:10:09.697977 | orchestrator | 2025-09-03 00:10:09.697982 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-09-03 00:10:10.380196 | orchestrator | ok: [testbed-manager] 2025-09-03 00:10:10.380267 | orchestrator | 2025-09-03 00:10:10.380282 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-09-03 00:10:10.427810 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:10:10.427853 | orchestrator | 2025-09-03 00:10:10.427861 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-09-03 00:10:11.739346 | orchestrator | changed:
[testbed-manager] 2025-09-03 00:10:11.739402 | orchestrator | 2025-09-03 00:10:11.739412 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-09-03 00:10:12.298722 | orchestrator | ok: [testbed-manager] 2025-09-03 00:10:12.298779 | orchestrator | 2025-09-03 00:10:12.298789 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-09-03 00:10:13.410575 | orchestrator | changed: [testbed-manager] 2025-09-03 00:10:13.410633 | orchestrator | 2025-09-03 00:10:13.410648 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-09-03 00:10:30.759700 | orchestrator | changed: [testbed-manager] 2025-09-03 00:10:30.759846 | orchestrator | 2025-09-03 00:10:30.759863 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-09-03 00:10:31.432796 | orchestrator | ok: [testbed-manager] 2025-09-03 00:10:31.432886 | orchestrator | 2025-09-03 00:10:31.432904 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-09-03 00:10:31.486683 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:10:31.486740 | orchestrator | 2025-09-03 00:10:31.486754 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-09-03 00:10:32.437012 | orchestrator | changed: [testbed-manager] 2025-09-03 00:10:32.437099 | orchestrator | 2025-09-03 00:10:32.437115 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-09-03 00:10:33.370987 | orchestrator | changed: [testbed-manager] 2025-09-03 00:10:33.371065 | orchestrator | 2025-09-03 00:10:33.371080 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-09-03 00:10:33.954098 | orchestrator | changed: [testbed-manager] 2025-09-03 00:10:33.954187 | orchestrator | 2025-09-03 00:10:33.954203 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-09-03 00:10:33.996231 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-09-03 00:10:33.996329 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-09-03 00:10:33.996341 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-09-03 00:10:33.996350 | orchestrator | deprecation_warnings=False in ansible.cfg. 
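The osism.commons.repository tasks above (create /etc/apt/sources.list.d, remove the legacy sources.list, copy an ubuntu.sources file, refresh the package cache) switch the manager to deb822-style APT sources. A minimal shell sketch of that sequence follows; the mirror URI, suites, and keyring path are assumptions, since the actual ubuntu.sources template shipped by the role is not visible in this log:

# replace the legacy one-line sources.list with a deb822 ubuntu.sources entry
# (mirror URI, suites and keyring path below are assumptions, not taken from this log)
sudo install -d /etc/apt/sources.list.d
sudo rm -f /etc/apt/sources.list
sudo tee /etc/apt/sources.list.d/ubuntu.sources > /dev/null <<'EOF'
Types: deb
URIs: http://archive.ubuntu.com/ubuntu
Suites: noble noble-updates noble-security
Components: main restricted universe multiverse
Signed-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg
EOF
sudo apt-get update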
2025-09-03 00:10:35.957796 | orchestrator | changed: [testbed-manager] 2025-09-03 00:10:35.957895 | orchestrator | 2025-09-03 00:10:35.957911 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-09-03 00:10:44.990652 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-09-03 00:10:44.990748 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-09-03 00:10:44.990766 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-09-03 00:10:44.990779 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-09-03 00:10:44.990797 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-09-03 00:10:44.990808 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-09-03 00:10:44.990820 | orchestrator | 2025-09-03 00:10:44.990833 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-09-03 00:10:46.006662 | orchestrator | changed: [testbed-manager] 2025-09-03 00:10:46.006753 | orchestrator | 2025-09-03 00:10:46.006769 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-09-03 00:10:46.052759 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:10:46.052839 | orchestrator | 2025-09-03 00:10:46.052857 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-09-03 00:10:50.222235 | orchestrator | changed: [testbed-manager] 2025-09-03 00:10:50.222352 | orchestrator | 2025-09-03 00:10:50.222370 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-09-03 00:10:50.266101 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:10:50.266162 | orchestrator | 2025-09-03 00:10:50.266177 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-09-03 00:12:26.386139 | orchestrator | changed: [testbed-manager] 2025-09-03 00:12:26.386245 | orchestrator | 2025-09-03 00:12:26.386265 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-09-03 00:12:27.475344 | orchestrator | ok: [testbed-manager] 2025-09-03 00:12:27.475429 | orchestrator | 2025-09-03 00:12:27.475443 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-03 00:12:27.475455 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-09-03 00:12:27.475465 | orchestrator | 2025-09-03 00:12:27.965120 | orchestrator | ok: Runtime: 0:02:21.454778 2025-09-03 00:12:27.981769 | 2025-09-03 00:12:27.981907 | TASK [Reboot manager] 2025-09-03 00:12:29.517759 | orchestrator | ok: Runtime: 0:00:00.943935 2025-09-03 00:12:29.535207 | 2025-09-03 00:12:29.535391 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-09-03 00:12:44.835316 | orchestrator | ok 2025-09-03 00:12:44.846801 | 2025-09-03 00:12:44.846951 | TASK [Wait a little longer for the manager so that everything is ready] 2025-09-03 00:13:44.887250 | orchestrator | ok 2025-09-03 00:13:44.897808 | 2025-09-03 00:13:44.897950 | TASK [Deploy manager + bootstrap nodes] 2025-09-03 00:13:47.400804 | orchestrator | 2025-09-03 00:13:47.401046 | orchestrator | # DEPLOY MANAGER 2025-09-03 00:13:47.401071 | orchestrator | 2025-09-03 00:13:47.401086 | orchestrator | + set -e 2025-09-03 00:13:47.401099 | orchestrator | + echo 2025-09-03 00:13:47.401113 | orchestrator | + echo '# DEPLOY 
MANAGER' 2025-09-03 00:13:47.401130 | orchestrator | + echo 2025-09-03 00:13:47.401182 | orchestrator | + cat /opt/manager-vars.sh 2025-09-03 00:13:47.404710 | orchestrator | export NUMBER_OF_NODES=6 2025-09-03 00:13:47.404741 | orchestrator | 2025-09-03 00:13:47.404753 | orchestrator | export CEPH_VERSION=reef 2025-09-03 00:13:47.404766 | orchestrator | export CONFIGURATION_VERSION=main 2025-09-03 00:13:47.404778 | orchestrator | export MANAGER_VERSION=latest 2025-09-03 00:13:47.404800 | orchestrator | export OPENSTACK_VERSION=2024.2 2025-09-03 00:13:47.404811 | orchestrator | 2025-09-03 00:13:47.404830 | orchestrator | export ARA=false 2025-09-03 00:13:47.404841 | orchestrator | export DEPLOY_MODE=manager 2025-09-03 00:13:47.404886 | orchestrator | export TEMPEST=true 2025-09-03 00:13:47.404901 | orchestrator | export IS_ZUUL=true 2025-09-03 00:13:47.404912 | orchestrator | 2025-09-03 00:13:47.404930 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.254 2025-09-03 00:13:47.404942 | orchestrator | export EXTERNAL_API=false 2025-09-03 00:13:47.404953 | orchestrator | 2025-09-03 00:13:47.404964 | orchestrator | export IMAGE_USER=ubuntu 2025-09-03 00:13:47.404978 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-09-03 00:13:47.404988 | orchestrator | 2025-09-03 00:13:47.404999 | orchestrator | export CEPH_STACK=ceph-ansible 2025-09-03 00:13:47.405017 | orchestrator | 2025-09-03 00:13:47.405028 | orchestrator | + echo 2025-09-03 00:13:47.405041 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-09-03 00:13:47.406418 | orchestrator | ++ export INTERACTIVE=false 2025-09-03 00:13:47.406439 | orchestrator | ++ INTERACTIVE=false 2025-09-03 00:13:47.406479 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-09-03 00:13:47.406492 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-09-03 00:13:47.406519 | orchestrator | + source /opt/manager-vars.sh 2025-09-03 00:13:47.406531 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-09-03 00:13:47.406542 | orchestrator | ++ NUMBER_OF_NODES=6 2025-09-03 00:13:47.406582 | orchestrator | ++ export CEPH_VERSION=reef 2025-09-03 00:13:47.406663 | orchestrator | ++ CEPH_VERSION=reef 2025-09-03 00:13:47.406679 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-09-03 00:13:47.406716 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-09-03 00:13:47.406727 | orchestrator | ++ export MANAGER_VERSION=latest 2025-09-03 00:13:47.406738 | orchestrator | ++ MANAGER_VERSION=latest 2025-09-03 00:13:47.406785 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-09-03 00:13:47.406808 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-09-03 00:13:47.406819 | orchestrator | ++ export ARA=false 2025-09-03 00:13:47.406851 | orchestrator | ++ ARA=false 2025-09-03 00:13:47.406863 | orchestrator | ++ export DEPLOY_MODE=manager 2025-09-03 00:13:47.406874 | orchestrator | ++ DEPLOY_MODE=manager 2025-09-03 00:13:47.406885 | orchestrator | ++ export TEMPEST=true 2025-09-03 00:13:47.406896 | orchestrator | ++ TEMPEST=true 2025-09-03 00:13:47.406907 | orchestrator | ++ export IS_ZUUL=true 2025-09-03 00:13:47.406917 | orchestrator | ++ IS_ZUUL=true 2025-09-03 00:13:47.406928 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.254 2025-09-03 00:13:47.406939 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.254 2025-09-03 00:13:47.406950 | orchestrator | ++ export EXTERNAL_API=false 2025-09-03 00:13:47.406961 | orchestrator | ++ EXTERNAL_API=false 2025-09-03 00:13:47.406972 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-09-03 
00:13:47.406982 | orchestrator | ++ IMAGE_USER=ubuntu 2025-09-03 00:13:47.406997 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-09-03 00:13:47.407009 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-09-03 00:13:47.407020 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-09-03 00:13:47.407031 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-09-03 00:13:47.407042 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2025-09-03 00:13:47.464340 | orchestrator | + docker version 2025-09-03 00:13:47.724524 | orchestrator | Client: Docker Engine - Community 2025-09-03 00:13:47.724609 | orchestrator | Version: 27.5.1 2025-09-03 00:13:47.724622 | orchestrator | API version: 1.47 2025-09-03 00:13:47.724635 | orchestrator | Go version: go1.22.11 2025-09-03 00:13:47.724646 | orchestrator | Git commit: 9f9e405 2025-09-03 00:13:47.724656 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-09-03 00:13:47.724668 | orchestrator | OS/Arch: linux/amd64 2025-09-03 00:13:47.724679 | orchestrator | Context: default 2025-09-03 00:13:47.724690 | orchestrator | 2025-09-03 00:13:47.724701 | orchestrator | Server: Docker Engine - Community 2025-09-03 00:13:47.724712 | orchestrator | Engine: 2025-09-03 00:13:47.724723 | orchestrator | Version: 27.5.1 2025-09-03 00:13:47.724735 | orchestrator | API version: 1.47 (minimum version 1.24) 2025-09-03 00:13:47.724771 | orchestrator | Go version: go1.22.11 2025-09-03 00:13:47.724783 | orchestrator | Git commit: 4c9b3b0 2025-09-03 00:13:47.724794 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-09-03 00:13:47.724805 | orchestrator | OS/Arch: linux/amd64 2025-09-03 00:13:47.724815 | orchestrator | Experimental: false 2025-09-03 00:13:47.724826 | orchestrator | containerd: 2025-09-03 00:13:47.724837 | orchestrator | Version: 1.7.27 2025-09-03 00:13:47.724848 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da 2025-09-03 00:13:47.724860 | orchestrator | runc: 2025-09-03 00:13:47.724880 | orchestrator | Version: 1.2.5 2025-09-03 00:13:47.724892 | orchestrator | GitCommit: v1.2.5-0-g59923ef 2025-09-03 00:13:47.724903 | orchestrator | docker-init: 2025-09-03 00:13:47.724914 | orchestrator | Version: 0.19.0 2025-09-03 00:13:47.724925 | orchestrator | GitCommit: de40ad0 2025-09-03 00:13:47.728681 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2025-09-03 00:13:47.738585 | orchestrator | + set -e 2025-09-03 00:13:47.738608 | orchestrator | + source /opt/manager-vars.sh 2025-09-03 00:13:47.738670 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-09-03 00:13:47.738683 | orchestrator | ++ NUMBER_OF_NODES=6 2025-09-03 00:13:47.738694 | orchestrator | ++ export CEPH_VERSION=reef 2025-09-03 00:13:47.738705 | orchestrator | ++ CEPH_VERSION=reef 2025-09-03 00:13:47.738716 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-09-03 00:13:47.738743 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-09-03 00:13:47.738754 | orchestrator | ++ export MANAGER_VERSION=latest 2025-09-03 00:13:47.738765 | orchestrator | ++ MANAGER_VERSION=latest 2025-09-03 00:13:47.738781 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-09-03 00:13:47.738792 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-09-03 00:13:47.738803 | orchestrator | ++ export ARA=false 2025-09-03 00:13:47.738814 | orchestrator | ++ ARA=false 2025-09-03 00:13:47.738825 | orchestrator | ++ export DEPLOY_MODE=manager 2025-09-03 00:13:47.738836 | orchestrator | ++ DEPLOY_MODE=manager 2025-09-03 00:13:47.738846 | orchestrator | ++ 
export TEMPEST=true 2025-09-03 00:13:47.738857 | orchestrator | ++ TEMPEST=true 2025-09-03 00:13:47.738867 | orchestrator | ++ export IS_ZUUL=true 2025-09-03 00:13:47.738878 | orchestrator | ++ IS_ZUUL=true 2025-09-03 00:13:47.739001 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.254 2025-09-03 00:13:47.739048 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.254 2025-09-03 00:13:47.739060 | orchestrator | ++ export EXTERNAL_API=false 2025-09-03 00:13:47.739071 | orchestrator | ++ EXTERNAL_API=false 2025-09-03 00:13:47.739095 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-09-03 00:13:47.739106 | orchestrator | ++ IMAGE_USER=ubuntu 2025-09-03 00:13:47.739117 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-09-03 00:13:47.739128 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-09-03 00:13:47.739140 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-09-03 00:13:47.739150 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-09-03 00:13:47.739162 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-09-03 00:13:47.739173 | orchestrator | ++ export INTERACTIVE=false 2025-09-03 00:13:47.739183 | orchestrator | ++ INTERACTIVE=false 2025-09-03 00:13:47.739194 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-09-03 00:13:47.739209 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-09-03 00:13:47.739225 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-09-03 00:13:47.739236 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-09-03 00:13:47.739272 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef 2025-09-03 00:13:47.746291 | orchestrator | + set -e 2025-09-03 00:13:47.746313 | orchestrator | + VERSION=reef 2025-09-03 00:13:47.747683 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2025-09-03 00:13:47.753641 | orchestrator | + [[ -n ceph_version: reef ]] 2025-09-03 00:13:47.753663 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2025-09-03 00:13:47.759644 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2 2025-09-03 00:13:47.766057 | orchestrator | + set -e 2025-09-03 00:13:47.766094 | orchestrator | + VERSION=2024.2 2025-09-03 00:13:47.766466 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2025-09-03 00:13:47.770385 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2025-09-03 00:13:47.770411 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml 2025-09-03 00:13:47.775943 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2025-09-03 00:13:47.776949 | orchestrator | ++ semver latest 7.0.0 2025-09-03 00:13:47.845872 | orchestrator | + [[ -1 -ge 0 ]] 2025-09-03 00:13:47.845926 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-09-03 00:13:47.845941 | orchestrator | + echo 'enable_osism_kubernetes: true' 2025-09-03 00:13:47.845953 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2025-09-03 00:13:47.939724 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-09-03 00:13:47.943507 | orchestrator | + source /opt/venv/bin/activate 2025-09-03 00:13:47.944737 | orchestrator | ++ deactivate nondestructive 2025-09-03 00:13:47.944752 | orchestrator | ++ '[' -n '' ']' 2025-09-03 00:13:47.944798 | orchestrator | ++ '[' -n '' ']' 2025-09-03 00:13:47.944808 | orchestrator | ++ hash -r 2025-09-03 00:13:47.944816 | orchestrator | ++ 
'[' -n '' ']' 2025-09-03 00:13:47.944823 | orchestrator | ++ unset VIRTUAL_ENV 2025-09-03 00:13:47.944864 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-09-03 00:13:47.944874 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2025-09-03 00:13:47.945309 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-09-03 00:13:47.945331 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-09-03 00:13:47.945339 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-09-03 00:13:47.945347 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-09-03 00:13:47.945366 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-09-03 00:13:47.945378 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-09-03 00:13:47.945388 | orchestrator | ++ export PATH 2025-09-03 00:13:47.945396 | orchestrator | ++ '[' -n '' ']' 2025-09-03 00:13:47.945403 | orchestrator | ++ '[' -z '' ']' 2025-09-03 00:13:47.945410 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-09-03 00:13:47.945418 | orchestrator | ++ PS1='(venv) ' 2025-09-03 00:13:47.945425 | orchestrator | ++ export PS1 2025-09-03 00:13:47.945432 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-09-03 00:13:47.945439 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-09-03 00:13:47.945449 | orchestrator | ++ hash -r 2025-09-03 00:13:47.945661 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2025-09-03 00:13:49.236800 | orchestrator | 2025-09-03 00:13:49.236919 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2025-09-03 00:13:49.236935 | orchestrator | 2025-09-03 00:13:49.236948 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-09-03 00:13:49.787982 | orchestrator | ok: [testbed-manager] 2025-09-03 00:13:49.788090 | orchestrator | 2025-09-03 00:13:49.788104 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-09-03 00:13:50.782851 | orchestrator | changed: [testbed-manager] 2025-09-03 00:13:50.782975 | orchestrator | 2025-09-03 00:13:50.782992 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2025-09-03 00:13:50.783005 | orchestrator | 2025-09-03 00:13:50.783016 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-03 00:13:53.193741 | orchestrator | ok: [testbed-manager] 2025-09-03 00:13:53.193857 | orchestrator | 2025-09-03 00:13:53.193873 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2025-09-03 00:13:53.251541 | orchestrator | ok: [testbed-manager] 2025-09-03 00:13:53.251587 | orchestrator | 2025-09-03 00:13:53.251603 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2025-09-03 00:13:53.713484 | orchestrator | changed: [testbed-manager] 2025-09-03 00:13:53.713589 | orchestrator | 2025-09-03 00:13:53.713610 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2025-09-03 00:13:53.757572 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:13:53.757634 | orchestrator | 2025-09-03 00:13:53.757646 | orchestrator | TASK [Install HWE kernel package on Ubuntu] 
************************************ 2025-09-03 00:13:54.097017 | orchestrator | changed: [testbed-manager] 2025-09-03 00:13:54.097120 | orchestrator | 2025-09-03 00:13:54.097135 | orchestrator | TASK [Use insecure glance configuration] *************************************** 2025-09-03 00:13:54.152967 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:13:54.153014 | orchestrator | 2025-09-03 00:13:54.153027 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2025-09-03 00:13:54.493545 | orchestrator | ok: [testbed-manager] 2025-09-03 00:13:54.493652 | orchestrator | 2025-09-03 00:13:54.493668 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2025-09-03 00:13:54.634137 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:13:54.634227 | orchestrator | 2025-09-03 00:13:54.634239 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2025-09-03 00:13:54.634301 | orchestrator | 2025-09-03 00:13:54.634314 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-03 00:13:57.353066 | orchestrator | ok: [testbed-manager] 2025-09-03 00:13:57.353189 | orchestrator | 2025-09-03 00:13:57.353206 | orchestrator | TASK [Apply traefik role] ****************************************************** 2025-09-03 00:13:57.458098 | orchestrator | included: osism.services.traefik for testbed-manager 2025-09-03 00:13:57.458164 | orchestrator | 2025-09-03 00:13:57.458178 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2025-09-03 00:13:57.511522 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2025-09-03 00:13:57.511552 | orchestrator | 2025-09-03 00:13:57.511564 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2025-09-03 00:13:58.574820 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2025-09-03 00:13:58.574932 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2025-09-03 00:13:58.574948 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2025-09-03 00:13:58.574959 | orchestrator | 2025-09-03 00:13:58.574972 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2025-09-03 00:14:00.427956 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2025-09-03 00:14:00.428073 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2025-09-03 00:14:00.428092 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2025-09-03 00:14:00.428104 | orchestrator | 2025-09-03 00:14:00.428117 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2025-09-03 00:14:01.080752 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-03 00:14:01.080860 | orchestrator | changed: [testbed-manager] 2025-09-03 00:14:01.080877 | orchestrator | 2025-09-03 00:14:01.080891 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2025-09-03 00:14:01.750868 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-03 00:14:01.750973 | orchestrator | changed: [testbed-manager] 2025-09-03 00:14:01.750988 | orchestrator | 2025-09-03 00:14:01.751001 | orchestrator | TASK [osism.services.traefik : Copy dynamic 
configuration] ********************* 2025-09-03 00:14:01.812760 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:14:01.812854 | orchestrator | 2025-09-03 00:14:01.812871 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2025-09-03 00:14:02.178868 | orchestrator | ok: [testbed-manager] 2025-09-03 00:14:02.178973 | orchestrator | 2025-09-03 00:14:02.178988 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2025-09-03 00:14:02.247021 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2025-09-03 00:14:02.247097 | orchestrator | 2025-09-03 00:14:02.247110 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2025-09-03 00:14:03.325200 | orchestrator | changed: [testbed-manager] 2025-09-03 00:14:03.325351 | orchestrator | 2025-09-03 00:14:03.325367 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2025-09-03 00:14:04.134099 | orchestrator | changed: [testbed-manager] 2025-09-03 00:14:04.134207 | orchestrator | 2025-09-03 00:14:04.134222 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2025-09-03 00:14:18.075166 | orchestrator | changed: [testbed-manager] 2025-09-03 00:14:18.075292 | orchestrator | 2025-09-03 00:14:18.075305 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2025-09-03 00:14:18.143855 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:14:18.143879 | orchestrator | 2025-09-03 00:14:18.143889 | orchestrator | PLAY [Deploy manager service] ************************************************** 2025-09-03 00:14:18.143898 | orchestrator | 2025-09-03 00:14:18.143907 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-03 00:14:19.892190 | orchestrator | ok: [testbed-manager] 2025-09-03 00:14:19.951183 | orchestrator | 2025-09-03 00:14:19.951308 | orchestrator | TASK [Apply manager role] ****************************************************** 2025-09-03 00:14:19.986716 | orchestrator | included: osism.services.manager for testbed-manager 2025-09-03 00:14:19.986835 | orchestrator | 2025-09-03 00:14:19.986851 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2025-09-03 00:14:20.038358 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2025-09-03 00:14:20.038420 | orchestrator | 2025-09-03 00:14:20.038437 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2025-09-03 00:14:22.422062 | orchestrator | ok: [testbed-manager] 2025-09-03 00:14:22.422173 | orchestrator | 2025-09-03 00:14:22.422185 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2025-09-03 00:14:22.479373 | orchestrator | ok: [testbed-manager] 2025-09-03 00:14:22.479406 | orchestrator | 2025-09-03 00:14:22.479418 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2025-09-03 00:14:22.605815 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2025-09-03 00:14:22.605862 | orchestrator | 2025-09-03 00:14:22.605871 | 
orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2025-09-03 00:14:25.443489 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2025-09-03 00:14:25.444325 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2025-09-03 00:14:25.444355 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2025-09-03 00:14:25.444369 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2025-09-03 00:14:25.444383 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2025-09-03 00:14:25.444393 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2025-09-03 00:14:25.444403 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2025-09-03 00:14:25.444413 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2025-09-03 00:14:25.444423 | orchestrator | 2025-09-03 00:14:25.444434 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2025-09-03 00:14:26.062707 | orchestrator | changed: [testbed-manager] 2025-09-03 00:14:26.062804 | orchestrator | 2025-09-03 00:14:26.062818 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2025-09-03 00:14:26.690221 | orchestrator | changed: [testbed-manager] 2025-09-03 00:14:26.690374 | orchestrator | 2025-09-03 00:14:26.690389 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2025-09-03 00:14:26.764102 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2025-09-03 00:14:26.764178 | orchestrator | 2025-09-03 00:14:26.764191 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2025-09-03 00:14:27.954871 | orchestrator | changed: [testbed-manager] => (item=ara) 2025-09-03 00:14:27.954975 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2025-09-03 00:14:27.954989 | orchestrator | 2025-09-03 00:14:27.955002 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2025-09-03 00:14:28.573996 | orchestrator | changed: [testbed-manager] 2025-09-03 00:14:28.574178 | orchestrator | 2025-09-03 00:14:28.574194 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2025-09-03 00:14:28.630283 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:14:28.630360 | orchestrator | 2025-09-03 00:14:28.630373 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2025-09-03 00:14:28.709538 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2025-09-03 00:14:28.709592 | orchestrator | 2025-09-03 00:14:28.709608 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2025-09-03 00:14:29.322851 | orchestrator | changed: [testbed-manager] 2025-09-03 00:14:29.322951 | orchestrator | 2025-09-03 00:14:29.322966 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2025-09-03 00:14:29.386690 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2025-09-03 00:14:29.386771 | orchestrator | 2025-09-03 00:14:29.386788 | orchestrator | 
TASK [osism.services.manager : Copy private ssh keys] ************************** 2025-09-03 00:14:30.719775 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-03 00:14:30.719882 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-03 00:14:30.719897 | orchestrator | changed: [testbed-manager] 2025-09-03 00:14:30.719911 | orchestrator | 2025-09-03 00:14:30.719923 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2025-09-03 00:14:31.336423 | orchestrator | changed: [testbed-manager] 2025-09-03 00:14:31.336525 | orchestrator | 2025-09-03 00:14:31.336538 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2025-09-03 00:14:31.388453 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:14:31.388474 | orchestrator | 2025-09-03 00:14:31.388485 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2025-09-03 00:14:31.484986 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2025-09-03 00:14:31.485086 | orchestrator | 2025-09-03 00:14:31.485101 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2025-09-03 00:14:32.020294 | orchestrator | changed: [testbed-manager] 2025-09-03 00:14:32.020403 | orchestrator | 2025-09-03 00:14:32.020418 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2025-09-03 00:14:32.422895 | orchestrator | changed: [testbed-manager] 2025-09-03 00:14:32.422992 | orchestrator | 2025-09-03 00:14:32.423009 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2025-09-03 00:14:33.628159 | orchestrator | changed: [testbed-manager] => (item=conductor) 2025-09-03 00:14:33.628313 | orchestrator | changed: [testbed-manager] => (item=openstack) 2025-09-03 00:14:33.628329 | orchestrator | 2025-09-03 00:14:33.628341 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2025-09-03 00:14:34.247219 | orchestrator | changed: [testbed-manager] 2025-09-03 00:14:34.247367 | orchestrator | 2025-09-03 00:14:34.247383 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2025-09-03 00:14:34.648443 | orchestrator | ok: [testbed-manager] 2025-09-03 00:14:34.648551 | orchestrator | 2025-09-03 00:14:34.648567 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2025-09-03 00:14:35.002728 | orchestrator | changed: [testbed-manager] 2025-09-03 00:14:35.002830 | orchestrator | 2025-09-03 00:14:35.002845 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2025-09-03 00:14:35.052852 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:14:35.052874 | orchestrator | 2025-09-03 00:14:35.052886 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2025-09-03 00:14:35.129835 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2025-09-03 00:14:35.129935 | orchestrator | 2025-09-03 00:14:35.129951 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2025-09-03 00:14:35.172997 | orchestrator | ok: [testbed-manager] 2025-09-03 00:14:35.173084 | 
orchestrator | 2025-09-03 00:14:35.173093 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2025-09-03 00:14:36.990838 | orchestrator | changed: [testbed-manager] => (item=osism) 2025-09-03 00:14:36.990958 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2025-09-03 00:14:36.990973 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2025-09-03 00:14:36.990985 | orchestrator | 2025-09-03 00:14:36.990997 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2025-09-03 00:14:37.629115 | orchestrator | changed: [testbed-manager] 2025-09-03 00:14:37.629231 | orchestrator | 2025-09-03 00:14:37.629300 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2025-09-03 00:14:38.287301 | orchestrator | changed: [testbed-manager] 2025-09-03 00:14:38.287407 | orchestrator | 2025-09-03 00:14:38.287423 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2025-09-03 00:14:38.927308 | orchestrator | changed: [testbed-manager] 2025-09-03 00:14:38.927416 | orchestrator | 2025-09-03 00:14:38.927431 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2025-09-03 00:14:38.995791 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2025-09-03 00:14:38.995838 | orchestrator | 2025-09-03 00:14:38.995850 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2025-09-03 00:14:39.034864 | orchestrator | ok: [testbed-manager] 2025-09-03 00:14:39.034898 | orchestrator | 2025-09-03 00:14:39.034910 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2025-09-03 00:14:39.687477 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2025-09-03 00:14:39.687576 | orchestrator | 2025-09-03 00:14:39.687590 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2025-09-03 00:14:39.772460 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2025-09-03 00:14:39.772518 | orchestrator | 2025-09-03 00:14:39.772531 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2025-09-03 00:14:40.462321 | orchestrator | changed: [testbed-manager] 2025-09-03 00:14:40.462402 | orchestrator | 2025-09-03 00:14:40.462410 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2025-09-03 00:14:41.013625 | orchestrator | ok: [testbed-manager] 2025-09-03 00:14:41.013719 | orchestrator | 2025-09-03 00:14:41.013733 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2025-09-03 00:14:41.070996 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:14:41.071022 | orchestrator | 2025-09-03 00:14:41.071034 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2025-09-03 00:14:41.124995 | orchestrator | ok: [testbed-manager] 2025-09-03 00:14:41.125043 | orchestrator | 2025-09-03 00:14:41.125060 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2025-09-03 00:14:41.917213 | orchestrator | changed: [testbed-manager] 2025-09-03 
00:14:41.917389 | orchestrator | 2025-09-03 00:14:41.917404 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2025-09-03 00:16:15.643414 | orchestrator | changed: [testbed-manager] 2025-09-03 00:16:15.643562 | orchestrator | 2025-09-03 00:16:15.643580 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2025-09-03 00:16:16.669549 | orchestrator | ok: [testbed-manager] 2025-09-03 00:16:16.669677 | orchestrator | 2025-09-03 00:16:16.669694 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2025-09-03 00:16:16.728078 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:16:16.728189 | orchestrator | 2025-09-03 00:16:16.728207 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2025-09-03 00:16:45.761184 | orchestrator | changed: [testbed-manager] 2025-09-03 00:16:45.761372 | orchestrator | 2025-09-03 00:16:45.761390 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2025-09-03 00:16:45.826659 | orchestrator | ok: [testbed-manager] 2025-09-03 00:16:45.826742 | orchestrator | 2025-09-03 00:16:45.826762 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-09-03 00:16:45.826784 | orchestrator | 2025-09-03 00:16:45.826804 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2025-09-03 00:16:45.872815 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:16:45.872904 | orchestrator | 2025-09-03 00:16:45.872919 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-09-03 00:17:45.938120 | orchestrator | Pausing for 60 seconds 2025-09-03 00:17:45.938309 | orchestrator | changed: [testbed-manager] 2025-09-03 00:17:45.938337 | orchestrator | 2025-09-03 00:17:45.938360 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-09-03 00:17:50.510362 | orchestrator | changed: [testbed-manager] 2025-09-03 00:17:50.510477 | orchestrator | 2025-09-03 00:17:50.510496 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for a healthy manager service] *** 2025-09-03 00:18:32.128346 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for a healthy manager service (50 retries left). 2025-09-03 00:18:32.128479 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for a healthy manager service (49 retries left).
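For orientation, the "Copy manager systemd unit file" and "Manage manager service" steps above amount to running the manager as a systemd-supervised Docker Compose project. A rough hand-run equivalent might look like the sketch below; the unit name manager.service and its contents are assumptions (only the copy of a unit file is visible here), while /opt/manager is the compose project directory used later in this trace:

# pick up the freshly copied unit file, then enable and start the manager service
sudo systemctl daemon-reload
sudo systemctl enable --now manager.service
# the unit wraps a compose project, so its containers can be checked directly
docker compose --project-directory /opt/manager ps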
2025-09-03 00:18:32.128496 | orchestrator | changed: [testbed-manager] 2025-09-03 00:18:32.128540 | orchestrator | 2025-09-03 00:18:32.128553 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2025-09-03 00:18:41.431796 | orchestrator | changed: [testbed-manager] 2025-09-03 00:18:41.431936 | orchestrator | 2025-09-03 00:18:41.431953 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2025-09-03 00:18:41.520069 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2025-09-03 00:18:41.520153 | orchestrator | 2025-09-03 00:18:41.520168 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-09-03 00:18:41.520181 | orchestrator | 2025-09-03 00:18:41.520193 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2025-09-03 00:18:41.568054 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:18:41.568120 | orchestrator | 2025-09-03 00:18:41.568134 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-03 00:18:41.568148 | orchestrator | testbed-manager : ok=66 changed=36 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-09-03 00:18:41.568160 | orchestrator | 2025-09-03 00:18:41.636633 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-09-03 00:18:41.636691 | orchestrator | + deactivate 2025-09-03 00:18:41.636705 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-09-03 00:18:41.636719 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-09-03 00:18:41.636730 | orchestrator | + export PATH 2025-09-03 00:18:41.636741 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-09-03 00:18:41.636753 | orchestrator | + '[' -n '' ']' 2025-09-03 00:18:41.636764 | orchestrator | + hash -r 2025-09-03 00:18:41.636801 | orchestrator | + '[' -n '' ']' 2025-09-03 00:18:41.636813 | orchestrator | + unset VIRTUAL_ENV 2025-09-03 00:18:41.636824 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-09-03 00:18:41.636835 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2025-09-03 00:18:41.636846 | orchestrator | + unset -f deactivate 2025-09-03 00:18:41.636858 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2025-09-03 00:18:41.641361 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-09-03 00:18:41.641384 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-09-03 00:18:41.641395 | orchestrator | + local max_attempts=60 2025-09-03 00:18:41.641407 | orchestrator | + local name=ceph-ansible 2025-09-03 00:18:41.641418 | orchestrator | + local attempt_num=1 2025-09-03 00:18:41.641965 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-03 00:18:41.678594 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-03 00:18:41.678686 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-09-03 00:18:41.678701 | orchestrator | + local max_attempts=60 2025-09-03 00:18:41.678714 | orchestrator | + local name=kolla-ansible 2025-09-03 00:18:41.678725 | orchestrator | + local attempt_num=1 2025-09-03 00:18:41.679312 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-09-03 00:18:41.703957 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-03 00:18:41.704007 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-09-03 00:18:41.704021 | orchestrator | + local max_attempts=60 2025-09-03 00:18:41.704033 | orchestrator | + local name=osism-ansible 2025-09-03 00:18:41.704044 | orchestrator | + local attempt_num=1 2025-09-03 00:18:41.704253 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-09-03 00:18:41.729488 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-03 00:18:41.729543 | orchestrator | + [[ true == \t\r\u\e ]] 2025-09-03 00:18:41.729557 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-09-03 00:18:42.349881 | orchestrator | + docker compose --project-directory /opt/manager ps 2025-09-03 00:18:42.539892 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-09-03 00:18:42.540012 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2025-09-03 00:18:42.540028 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2025-09-03 00:18:42.540071 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2025-09-03 00:18:42.540086 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2025-09-03 00:18:42.540109 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy) 2025-09-03 00:18:42.540121 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy) 2025-09-03 00:18:42.540132 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 52 seconds (healthy) 2025-09-03 00:18:42.540143 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest 
"/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy) 2025-09-03 00:18:42.540154 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.3 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2025-09-03 00:18:42.540165 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy) 2025-09-03 00:18:42.540176 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.5-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2025-09-03 00:18:42.540187 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2025-09-03 00:18:42.540198 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend About a minute ago Up About a minute 192.168.16.5:3000->3000/tcp 2025-09-03 00:18:42.540209 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2025-09-03 00:18:42.540272 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy) 2025-09-03 00:18:42.545635 | orchestrator | ++ semver latest 7.0.0 2025-09-03 00:18:42.602599 | orchestrator | + [[ -1 -ge 0 ]] 2025-09-03 00:18:42.602648 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-09-03 00:18:42.602663 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-09-03 00:18:42.607385 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-09-03 00:18:54.546909 | orchestrator | 2025-09-03 00:18:54 | INFO  | Task a3cd6cc5-2f50-44cb-a7ec-2d6a09142faf (resolvconf) was prepared for execution. 2025-09-03 00:18:54.547058 | orchestrator | 2025-09-03 00:18:54 | INFO  | It takes a moment until task a3cd6cc5-2f50-44cb-a7ec-2d6a09142faf (resolvconf) has been started and output is visible here. 
2025-09-03 00:19:08.803836 | orchestrator | 2025-09-03 00:19:08.803977 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2025-09-03 00:19:08.803992 | orchestrator | 2025-09-03 00:19:08.804002 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-03 00:19:08.804044 | orchestrator | Wednesday 03 September 2025 00:18:58 +0000 (0:00:00.144) 0:00:00.144 *** 2025-09-03 00:19:08.804054 | orchestrator | ok: [testbed-manager] 2025-09-03 00:19:08.804065 | orchestrator | 2025-09-03 00:19:08.804074 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-09-03 00:19:08.804084 | orchestrator | Wednesday 03 September 2025 00:19:03 +0000 (0:00:04.680) 0:00:04.824 *** 2025-09-03 00:19:08.804093 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:19:08.804102 | orchestrator | 2025-09-03 00:19:08.804111 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-09-03 00:19:08.804120 | orchestrator | Wednesday 03 September 2025 00:19:03 +0000 (0:00:00.067) 0:00:04.891 *** 2025-09-03 00:19:08.804129 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2025-09-03 00:19:08.804139 | orchestrator | 2025-09-03 00:19:08.804148 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-09-03 00:19:08.804157 | orchestrator | Wednesday 03 September 2025 00:19:03 +0000 (0:00:00.075) 0:00:04.966 *** 2025-09-03 00:19:08.804165 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2025-09-03 00:19:08.804174 | orchestrator | 2025-09-03 00:19:08.804183 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-09-03 00:19:08.804192 | orchestrator | Wednesday 03 September 2025 00:19:03 +0000 (0:00:00.070) 0:00:05.037 *** 2025-09-03 00:19:08.804200 | orchestrator | ok: [testbed-manager] 2025-09-03 00:19:08.804209 | orchestrator | 2025-09-03 00:19:08.804217 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-09-03 00:19:08.804226 | orchestrator | Wednesday 03 September 2025 00:19:04 +0000 (0:00:01.062) 0:00:06.099 *** 2025-09-03 00:19:08.804259 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:19:08.804268 | orchestrator | 2025-09-03 00:19:08.804277 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-09-03 00:19:08.804285 | orchestrator | Wednesday 03 September 2025 00:19:04 +0000 (0:00:00.057) 0:00:06.157 *** 2025-09-03 00:19:08.804294 | orchestrator | ok: [testbed-manager] 2025-09-03 00:19:08.804303 | orchestrator | 2025-09-03 00:19:08.804312 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-09-03 00:19:08.804322 | orchestrator | Wednesday 03 September 2025 00:19:04 +0000 (0:00:00.474) 0:00:06.632 *** 2025-09-03 00:19:08.804333 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:19:08.804343 | orchestrator | 2025-09-03 00:19:08.804354 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-09-03 00:19:08.804366 | orchestrator | Wednesday 03 September 2025 00:19:04 +0000 (0:00:00.080) 
0:00:06.712 *** 2025-09-03 00:19:08.804376 | orchestrator | changed: [testbed-manager] 2025-09-03 00:19:08.804387 | orchestrator | 2025-09-03 00:19:08.804397 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-09-03 00:19:08.804407 | orchestrator | Wednesday 03 September 2025 00:19:05 +0000 (0:00:00.523) 0:00:07.235 *** 2025-09-03 00:19:08.804418 | orchestrator | changed: [testbed-manager] 2025-09-03 00:19:08.804428 | orchestrator | 2025-09-03 00:19:08.804439 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-09-03 00:19:08.804449 | orchestrator | Wednesday 03 September 2025 00:19:06 +0000 (0:00:01.053) 0:00:08.289 *** 2025-09-03 00:19:08.804460 | orchestrator | ok: [testbed-manager] 2025-09-03 00:19:08.804470 | orchestrator | 2025-09-03 00:19:08.804480 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-09-03 00:19:08.804490 | orchestrator | Wednesday 03 September 2025 00:19:07 +0000 (0:00:00.940) 0:00:09.229 *** 2025-09-03 00:19:08.804513 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2025-09-03 00:19:08.804530 | orchestrator | 2025-09-03 00:19:08.804541 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-09-03 00:19:08.804551 | orchestrator | Wednesday 03 September 2025 00:19:07 +0000 (0:00:00.074) 0:00:09.304 *** 2025-09-03 00:19:08.804561 | orchestrator | changed: [testbed-manager] 2025-09-03 00:19:08.804571 | orchestrator | 2025-09-03 00:19:08.804581 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-03 00:19:08.804593 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-03 00:19:08.804604 | orchestrator | 2025-09-03 00:19:08.804614 | orchestrator | 2025-09-03 00:19:08.804625 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-03 00:19:08.804635 | orchestrator | Wednesday 03 September 2025 00:19:08 +0000 (0:00:01.099) 0:00:10.404 *** 2025-09-03 00:19:08.804646 | orchestrator | =============================================================================== 2025-09-03 00:19:08.804656 | orchestrator | Gathering Facts --------------------------------------------------------- 4.68s 2025-09-03 00:19:08.804666 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.10s 2025-09-03 00:19:08.804676 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.06s 2025-09-03 00:19:08.804685 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.05s 2025-09-03 00:19:08.804694 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.94s 2025-09-03 00:19:08.804702 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.52s 2025-09-03 00:19:08.804727 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.47s 2025-09-03 00:19:08.804737 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s 2025-09-03 00:19:08.804745 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s 2025-09-03 
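
The resolvconf run above hands /etc/resolv.conf over to systemd-resolved: conflicting packages are removed, the stub resolver file is symlinked in, configuration is copied, and the service is started and then restarted. A rough shell equivalent of the reported tasks, as a sketch only; the configuration file contents and the role's handlers are not shown in the log:

    # Mirrors the resolvconf task names above for a Debian-family host.
    ln -sf /run/systemd/resolve/stub-resolv.conf /etc/resolv.conf
    systemctl enable --now systemd-resolved
    systemctl restart systemd-resolved   # triggered here because the configuration changed
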
00:19:08.804754 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.07s 2025-09-03 00:19:08.804763 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.07s 2025-09-03 00:19:08.804771 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s 2025-09-03 00:19:08.804780 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s 2025-09-03 00:19:09.063902 | orchestrator | + osism apply sshconfig 2025-09-03 00:19:21.009676 | orchestrator | 2025-09-03 00:19:21 | INFO  | Task 20803fa8-0a2a-4b2b-a521-bf3fe4084378 (sshconfig) was prepared for execution. 2025-09-03 00:19:21.009807 | orchestrator | 2025-09-03 00:19:21 | INFO  | It takes a moment until task 20803fa8-0a2a-4b2b-a521-bf3fe4084378 (sshconfig) has been started and output is visible here. 2025-09-03 00:19:32.530478 | orchestrator | 2025-09-03 00:19:32.530623 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2025-09-03 00:19:32.530640 | orchestrator | 2025-09-03 00:19:32.530653 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2025-09-03 00:19:32.530664 | orchestrator | Wednesday 03 September 2025 00:19:24 +0000 (0:00:00.158) 0:00:00.158 *** 2025-09-03 00:19:32.530676 | orchestrator | ok: [testbed-manager] 2025-09-03 00:19:32.530689 | orchestrator | 2025-09-03 00:19:32.530700 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2025-09-03 00:19:32.530711 | orchestrator | Wednesday 03 September 2025 00:19:25 +0000 (0:00:00.552) 0:00:00.711 *** 2025-09-03 00:19:32.530723 | orchestrator | changed: [testbed-manager] 2025-09-03 00:19:32.530735 | orchestrator | 2025-09-03 00:19:32.530746 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2025-09-03 00:19:32.530758 | orchestrator | Wednesday 03 September 2025 00:19:25 +0000 (0:00:00.486) 0:00:01.197 *** 2025-09-03 00:19:32.530770 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2025-09-03 00:19:32.530781 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2025-09-03 00:19:32.530825 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2025-09-03 00:19:32.530837 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2025-09-03 00:19:32.530848 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-09-03 00:19:32.530880 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2025-09-03 00:19:32.530896 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2025-09-03 00:19:32.530915 | orchestrator | 2025-09-03 00:19:32.530936 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2025-09-03 00:19:32.530955 | orchestrator | Wednesday 03 September 2025 00:19:31 +0000 (0:00:05.736) 0:00:06.934 *** 2025-09-03 00:19:32.530974 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:19:32.530992 | orchestrator | 2025-09-03 00:19:32.531011 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2025-09-03 00:19:32.531031 | orchestrator | Wednesday 03 September 2025 00:19:31 +0000 (0:00:00.054) 0:00:06.988 *** 2025-09-03 00:19:32.531050 | orchestrator | changed: [testbed-manager] 2025-09-03 00:19:32.531070 | orchestrator | 2025-09-03 00:19:32.531083 | 
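
The sshconfig play above builds the operator's SSH client configuration from one fragment per host and then assembles them into a single file. A hand-rolled sketch of the same idea; the host list is taken from the log, while the contents of each fragment are an assumption since the play output does not show them:

    # Per-host fragments under ~/.ssh/config.d, assembled into ~/.ssh/config.
    mkdir -p ~/.ssh/config.d
    for host in testbed-manager testbed-node-{0..5}; do
        printf 'Host %s\n' "$host" > ~/.ssh/config.d/"$host"   # real fragments carry options not shown here
    done
    cat ~/.ssh/config.d/* > ~/.ssh/config
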
orchestrator | PLAY RECAP ********************************************************************* 2025-09-03 00:19:32.531098 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-03 00:19:32.531111 | orchestrator | 2025-09-03 00:19:32.531123 | orchestrator | 2025-09-03 00:19:32.531136 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-03 00:19:32.531149 | orchestrator | Wednesday 03 September 2025 00:19:32 +0000 (0:00:00.567) 0:00:07.556 *** 2025-09-03 00:19:32.531162 | orchestrator | =============================================================================== 2025-09-03 00:19:32.531175 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.74s 2025-09-03 00:19:32.531187 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.57s 2025-09-03 00:19:32.531200 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.55s 2025-09-03 00:19:32.531213 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.49s 2025-09-03 00:19:32.531225 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.05s 2025-09-03 00:19:32.813570 | orchestrator | + osism apply known-hosts 2025-09-03 00:19:44.756725 | orchestrator | 2025-09-03 00:19:44 | INFO  | Task 7cb46031-0556-4767-8256-b19d1f85ccd7 (known-hosts) was prepared for execution. 2025-09-03 00:19:44.756866 | orchestrator | 2025-09-03 00:19:44 | INFO  | It takes a moment until task 7cb46031-0556-4767-8256-b19d1f85ccd7 (known-hosts) has been started and output is visible here. 2025-09-03 00:20:01.007340 | orchestrator | 2025-09-03 00:20:01.007487 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-09-03 00:20:01.007505 | orchestrator | 2025-09-03 00:20:01.007519 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-09-03 00:20:01.007531 | orchestrator | Wednesday 03 September 2025 00:19:48 +0000 (0:00:00.179) 0:00:00.179 *** 2025-09-03 00:20:01.007543 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-09-03 00:20:01.007556 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-09-03 00:20:01.007567 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-09-03 00:20:01.007578 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-09-03 00:20:01.007589 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-09-03 00:20:01.007600 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-09-03 00:20:01.007610 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-09-03 00:20:01.007621 | orchestrator | 2025-09-03 00:20:01.007633 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-09-03 00:20:01.007645 | orchestrator | Wednesday 03 September 2025 00:19:54 +0000 (0:00:05.852) 0:00:06.031 *** 2025-09-03 00:20:01.007680 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-09-03 00:20:01.007694 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned 
entries of testbed-node-0) 2025-09-03 00:20:01.007705 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-09-03 00:20:01.007716 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-09-03 00:20:01.007727 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-09-03 00:20:01.007750 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-09-03 00:20:01.007762 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-09-03 00:20:01.007773 | orchestrator | 2025-09-03 00:20:01.007784 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-03 00:20:01.007798 | orchestrator | Wednesday 03 September 2025 00:19:54 +0000 (0:00:00.163) 0:00:06.194 *** 2025-09-03 00:20:01.007812 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqeg1f6AmMetiZv8kyaJLSouAcwk2yKJ9GuPZi/l1Am) 2025-09-03 00:20:01.007832 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCZfXIRMHArNh5eqXtnq6qoPbwpaAkhj/a+EgHSk1ir+BPd94XXlXiGBO42Dp3EB0gP7PttWw/Gjzmxf7oegn0ZTY9qhkZWc6y04WMl+iExTtwxw45/xvsOSOKI5V58og1YqXC0qrR5NIcJ/XunEkS/Zbb0+rPkeTUeNn9ft4OVahKvV9y1hgZrrOUgATEzrtH41L5tbCifaWOs37kr6/tu94CFVzCrGIt+Jt+EORT+RbYSH0gQueGLILOO6Zf8ruNusXl+ABWgu9IAlDEZERwNE8QPsoeNsQgYrg7H2kJH6S7TLxmkdmlmFWpq4xTUlFd+vc/+9HpZhjrMdk/2Cil+cUXoyODdZ5Xl9Rd9DJDYCJzi9srBOVxsO07NDYn/dPHrVxmowl2rtOBkVz6sM73pUeDWb2DJ7MEDf6uB/V7eMadvBPPrp/t5svLm0qtuVH5ryZkffJNjBry+c0emRXDicdAluIzKcHOyWCBydMox2/yUTSHD48ce+6LCVXn0Vg8=) 2025-09-03 00:20:01.007850 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAjatQP2wwcrFIPZJoRTpHlEoCPeyEjqe7SiUb1/n4AHLXKg4Uz8SCfP6z0iYWhFfTl0YrI9JCar7wESuXFzYOI=) 2025-09-03 00:20:01.007865 | orchestrator | 2025-09-03 00:20:01.007878 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-03 00:20:01.007892 | orchestrator | Wednesday 03 September 2025 00:19:55 +0000 (0:00:01.145) 0:00:07.340 *** 2025-09-03 00:20:01.007906 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBC6q/bgSnCZ34IIOztlWHzHFdlNj4uN3OUw7CP/SoKWIVRQx8pNboUhK1L9EZWcub0KL1+3+yskWnWiqxuqaXog=) 2025-09-03 00:20:01.007920 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMFyGfanxtPkvfV/9SiHDBSBOKvize0uTucToU9la5Xc) 2025-09-03 00:20:01.007964 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC0+iBJYVLtkTIT/gbDawgBPK7AkngRp8yl2ZDGXjSLRcUWLNB8PlS9xQH1ZRVnC4L66PopC+TVKLNK0Ll0pdfAW+ZubhmMg/GMSz3UY3uYIESJg6KM74xDvBcltdB/wB04a+Pxfcg5C/Cf5BU1QvKmzSo88+xn4I2C+1/InksAOohwt1340oq+JEH1iKT22SzMiDIPmzgDP9zbDYaZhQuZFWh6b32KR8SO7reISkMEe+677CV0kbzvt2s4v+z4gxiOszpP9oFQn+NdzGn/JrJugzkm9dXBSYzbpInc5giHc8EZYhFIBc+f8bc/cXUOUYLXzFlqSRqucMUsQSbo3yKs3nMXirllq85neHO3Q86pxb2dvO6JkzSoQydgpEDSNR89FvCASD3W9diZZz5MSEPA8jwVVU4o9zgoXatY8l5BmkzZjx90+aIybKHab/BKF9Ho+gVTM+ZfmebDY6izC4c1GhxUzZ4T14cmG7E+ZnY9h9RJVfe9DYoFgKRk0IGS2uE=) 2025-09-03 00:20:01.007987 | orchestrator | 2025-09-03 00:20:01.008002 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-03 00:20:01.008015 | orchestrator | Wednesday 03 September 2025 00:19:56 +0000 (0:00:01.065) 0:00:08.406 *** 2025-09-03 00:20:01.008028 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEHK8tkTRk2DnNQD4zAtlYXP236DRTaXzTF3+Ja9uCtaAXlmR6wk9FN2uRN5A76903j5e9w6gtej72Dp3Uphr5E=) 2025-09-03 00:20:01.008042 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCb8i8w6OF1S0p6FxMbDwXNO3Y4jZ1ahvHci8QW8NeRcEh0e8vfhcxuW5zk5bcUoEI9blZYrLtl8GfnB7fHwIchBVAgyDrwYnoY9rwMzKYguFj4Z9b/uey+P6lZ5mO104PiA+d2PL7amzHZltWxiqteAfj9bO0M5CtWA2VSUZU/5XsMHwEAaiow02iA0e6mLAR1f4UsjkZfJOPLUap+BfVnM8zJMku75OdRbzXH7m9PnLDmAVfJOUKkBp7C8ySQ39I6rAyR8NJmEMz4InOiZo5dCj7OfBYEMXoSO/dlsnFdDo5uYOjiGgirUnUORrsIk8cyszvm/9a1iY2VMMLTxpU499s9IPZleqnEl3rdSWqCE0jWIi5HeWhqDMzXPPJhud22BGM6DA6b3oUZWaN6/bmGo57DmOe7WJQHbxNbD5uFv141LsXbQYMQX1ZbT2TKkfd9Yfh3p5BHfYTtdAARECDKrw5JAkiMPBKUC5kbVc26jEyMaNsX7/NFjBowZQrXDTk=) 2025-09-03 00:20:01.008056 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAonTrACFc6dYmzJN8ASUXDW0Qa+En/rpR60SUAZPB/H) 2025-09-03 00:20:01.008069 | orchestrator | 2025-09-03 00:20:01.008083 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-03 00:20:01.008096 | orchestrator | Wednesday 03 September 2025 00:19:57 +0000 (0:00:01.031) 0:00:09.437 *** 2025-09-03 00:20:01.008109 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICrwIxsbfVGjEJ5qvbrBwOlZT7KXX2k+UinyhV5s/Fu5) 2025-09-03 00:20:01.008194 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCrneAYgCpLr3B+D8c09qdJX3EgDmx0nJKM3qbF5RzhWxfmnl2+UVwSZlp5smz5xF4LA9mi0ObeM6n6Th9k49aKhs0M7kplR1GETpH3vjQgTXDehtycQqTZYMm3E6M1oojRfKd77iyglXlr+fZOt2OaXjdzUW1jY8g0ft+myww8crFqEDKSCH6JVivp4vUd5t3WT9NtEtf1ZxGeOKOUXy+1siZuBkC1pYBNZToQxm0fliuF2IMDbyP+Lvszo1Khtiy8mI+sOaWVAxcm9G9+5Pql2xu3ZyojjL0krTKJSAIbpiEKOmBJXuNYKx1NLVd0iEeM3sJXMNkkn5qVz1AhzF+S+jZv5iGA12iymqM4Yst5ZwLl808fA/RQBFIR6hdGKS2+cHviqpE/2xgmGe6ANA8MRrwd1AA0A70VjQcIqW2RmlFs3S8YLsjNt4EIDoxXAO25G6fdfBPxQhMsraGFoQ1ltk8jjX9tquyTMNbTb8+3PWs8q5xb6HfS9V5b8Rce4Y8=) 2025-09-03 00:20:01.008207 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLAzurBIWu2rHDoIxoV+G/MZht7nldIZFk/sl69Q4Ty4NiQyvYa8jl2cRdD660lqguQwqNNY6jtuUqIrhqa7gLA=) 2025-09-03 00:20:01.008218 | orchestrator | 2025-09-03 00:20:01.008229 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-03 00:20:01.008240 | orchestrator | Wednesday 03 September 2025 00:19:58 +0000 (0:00:01.031) 
0:00:10.469 *** 2025-09-03 00:20:01.008251 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPR0W3EzwSz2xogiMNZambHm8xAniqhuYPN0jGLPgRyz) 2025-09-03 00:20:01.008262 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDb/Gz/SxFiyKmdrCG0zpNuNVqzG5IZK20q7xedFghSmjdccVn4ju0+rXgVzdfhz7zJ4Yj3gtXLV5rg0qwqZemDuT3d3T+tiC+qbKI5G1CAf7Tvdcb6L8kAdAnHD0p+Mdc87OQgik2t+wE94woSqBGm5Gp6jxa5Cwa5Z9ADX9yxyXVuLnKSgjuTrVNJNCquhb520LclZO3nIFvJsDPcxE/UqtTPtXUPPqtbPLRqqn9CjDPFS6glpgWK8TUfU8k2toF0/pWUFDZTvCkL9703iFTtpOEQ7tBMub5iTxYOSe4ElnEbLBhsaVn/QxtjdVfJwRQHzjhAoQNT8ZpNm2RPoH5L3Y5bWBXPFFtZEA9V2AcOsBa9Iggth2gSHjE8ZGgvNmbrSyN6OzIM+hK91pcg14aa9avIag7ZDi3Va/l8OlcwYIpWPnWZI0yOXQsmjEa3W4OEqTVXLuEpQLdZmzaAd6QKE6l24/0RrePCCFUt5MMx509DNYPogLqw0y/6xxV2fW0=) 2025-09-03 00:20:01.008274 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI5LaX+0/ANM86Vp+1z9qUYBZQEqBE0KH5pjM1srcVLJAUMARMYGqF06E8pb0H9fvejr2TyFM/PNkCLLPWRHMBk=) 2025-09-03 00:20:01.008310 | orchestrator | 2025-09-03 00:20:01.008322 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-03 00:20:01.008333 | orchestrator | Wednesday 03 September 2025 00:19:59 +0000 (0:00:01.047) 0:00:11.516 *** 2025-09-03 00:20:01.008352 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDogDWnfCkKQeu3ewtUJSPlUjsQGZrbTnIqiW9ys/cXUzxtOap+mnHFbhdYbIg1MawDuoJLfh2Ul1w7uiyDF+G0=) 2025-09-03 00:20:11.842924 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILMaeZCSorn2deCWj9JDM+qgjVlK97KuycHGIQR7VB+T) 2025-09-03 00:20:11.843045 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDZebJYY8ccD3aZr2GxXyGSEMA5lk+hqR2cUOwW3ZcyfjvcmIcBR36wN41Qm354ZFoVkA4O0Ef09qqRr+rLB3Sc+bLsC+wrYEltb2RR1ww6seC/4Kwy8tUqGwkps04+0YOAYBFHmf1sNsxtZ/w9VHqIRh5o3jrG9fgtDDElFw/n0DCBQj7fTtRnJsCdHJ5j15LiQecj5hA47Singa8dcg1DpNROPnIqvU0QAS4b2XS1yur7QbiVG5NQPKUgb+MYfI6aCG8XJsmVrSYXrg6aIVEB7Woh5MMKQL+hB55GDD5aORzy/gGrqXXDf/ku2JZliwCaypgF0ly+Z5zIt/4tkSeUbl4B0WdgbY2EQi2hHbNzWC095UZZ3neDh4E2O1iNEWaMTxHq6tH35xZx5BeEUlvqLnCI1M+GCxEdTemw+WL7rNPVG0cVLLcEmaYpsH7/v1iQ86aHVIhzmZGdKm7tGDSELtlqfg8scRdsxfSboww4wVzQ3uDOAozYiWXKAW5fdlk=) 2025-09-03 00:20:11.843065 | orchestrator | 2025-09-03 00:20:11.843078 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-03 00:20:11.843091 | orchestrator | Wednesday 03 September 2025 00:20:00 +0000 (0:00:01.024) 0:00:12.541 *** 2025-09-03 00:20:11.843102 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQClNm/1ViC7NnqYjIOtAMxkhkPq0+felzTK5cCXpip7bsnN73POy04dcfO3QVbVKa/wDNejWUhfXfR5H+1f08oCVXJFjNZIm7YCVnrNCO/6aj4n7k8BvoMHUIjmQhUW8r/oMgBY6AVJfuqk+ZKgpZcTi57N4vtuKeb8mKivw3eHRLU98e5fRJuAMegshvE9B9Ag7fJXMhsZRdl5ZTzBRvcHsBLnS175+nhoqiZen2jzpt3qCWhp3jIBU11h0QdEMbp0bJA4Bryr5PWJLNQ9vkVg78wNs9cj1Y0yijyynW1S+lUB3OIib5AvFPGMZL7FBcfkTPutQVpYfLtsAWHozT3u2krb/AYPu0c3zU26HEy5LGQOgbSMIE7lgwRpMYUSQ4npaNSeShl+Wxjq9flWRea81WZAGb4jj9NwbpSvRGGLmSysQBkcIEAMY8yFKJbqTOnV3myfxN8QDREy7WxNrkfQz5Mt2oca0gcWlBX/15lICO/f7kwfhnf1yFHSHIEPl6U=) 2025-09-03 00:20:11.843115 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHast/EmtDofE5wt8hACpPc6lfX+eruBdP5zzVgyXSNb8FnBFUWEo55Jh9h9rmwHxuABhCRPoqfQxiauFw8TdLE=) 2025-09-03 00:20:11.843128 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGaNfZBQuvTDoVBDQbgvWcSIdQVcn57dCeLh7GK60nUP) 2025-09-03 00:20:11.843139 | orchestrator | 2025-09-03 00:20:11.843151 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-09-03 00:20:11.843162 | orchestrator | Wednesday 03 September 2025 00:20:02 +0000 (0:00:01.097) 0:00:13.638 *** 2025-09-03 00:20:11.843174 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-09-03 00:20:11.843185 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-09-03 00:20:11.843196 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-09-03 00:20:11.843207 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-09-03 00:20:11.843217 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-09-03 00:20:11.843228 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-09-03 00:20:11.843239 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-09-03 00:20:11.843250 | orchestrator | 2025-09-03 00:20:11.843261 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-09-03 00:20:11.843272 | orchestrator | Wednesday 03 September 2025 00:20:07 +0000 (0:00:05.298) 0:00:18.936 *** 2025-09-03 00:20:11.843352 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-09-03 00:20:11.843366 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-09-03 00:20:11.843395 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-09-03 00:20:11.843407 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-09-03 00:20:11.843418 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-09-03 00:20:11.843433 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-09-03 00:20:11.843445 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-09-03 00:20:11.843455 | orchestrator | 2025-09-03 00:20:11.843487 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-03 00:20:11.843502 | orchestrator | Wednesday 03 September 2025 00:20:07 +0000 (0:00:00.176) 0:00:19.113 *** 2025-09-03 00:20:11.843515 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 
AAAAC3NzaC1lZDI1NTE5AAAAIBqeg1f6AmMetiZv8kyaJLSouAcwk2yKJ9GuPZi/l1Am) 2025-09-03 00:20:11.843530 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCZfXIRMHArNh5eqXtnq6qoPbwpaAkhj/a+EgHSk1ir+BPd94XXlXiGBO42Dp3EB0gP7PttWw/Gjzmxf7oegn0ZTY9qhkZWc6y04WMl+iExTtwxw45/xvsOSOKI5V58og1YqXC0qrR5NIcJ/XunEkS/Zbb0+rPkeTUeNn9ft4OVahKvV9y1hgZrrOUgATEzrtH41L5tbCifaWOs37kr6/tu94CFVzCrGIt+Jt+EORT+RbYSH0gQueGLILOO6Zf8ruNusXl+ABWgu9IAlDEZERwNE8QPsoeNsQgYrg7H2kJH6S7TLxmkdmlmFWpq4xTUlFd+vc/+9HpZhjrMdk/2Cil+cUXoyODdZ5Xl9Rd9DJDYCJzi9srBOVxsO07NDYn/dPHrVxmowl2rtOBkVz6sM73pUeDWb2DJ7MEDf6uB/V7eMadvBPPrp/t5svLm0qtuVH5ryZkffJNjBry+c0emRXDicdAluIzKcHOyWCBydMox2/yUTSHD48ce+6LCVXn0Vg8=) 2025-09-03 00:20:11.843544 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAjatQP2wwcrFIPZJoRTpHlEoCPeyEjqe7SiUb1/n4AHLXKg4Uz8SCfP6z0iYWhFfTl0YrI9JCar7wESuXFzYOI=) 2025-09-03 00:20:11.843557 | orchestrator | 2025-09-03 00:20:11.843570 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-03 00:20:11.843583 | orchestrator | Wednesday 03 September 2025 00:20:08 +0000 (0:00:01.111) 0:00:20.225 *** 2025-09-03 00:20:11.843596 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMFyGfanxtPkvfV/9SiHDBSBOKvize0uTucToU9la5Xc) 2025-09-03 00:20:11.843611 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC0+iBJYVLtkTIT/gbDawgBPK7AkngRp8yl2ZDGXjSLRcUWLNB8PlS9xQH1ZRVnC4L66PopC+TVKLNK0Ll0pdfAW+ZubhmMg/GMSz3UY3uYIESJg6KM74xDvBcltdB/wB04a+Pxfcg5C/Cf5BU1QvKmzSo88+xn4I2C+1/InksAOohwt1340oq+JEH1iKT22SzMiDIPmzgDP9zbDYaZhQuZFWh6b32KR8SO7reISkMEe+677CV0kbzvt2s4v+z4gxiOszpP9oFQn+NdzGn/JrJugzkm9dXBSYzbpInc5giHc8EZYhFIBc+f8bc/cXUOUYLXzFlqSRqucMUsQSbo3yKs3nMXirllq85neHO3Q86pxb2dvO6JkzSoQydgpEDSNR89FvCASD3W9diZZz5MSEPA8jwVVU4o9zgoXatY8l5BmkzZjx90+aIybKHab/BKF9Ho+gVTM+ZfmebDY6izC4c1GhxUzZ4T14cmG7E+ZnY9h9RJVfe9DYoFgKRk0IGS2uE=) 2025-09-03 00:20:11.843624 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBC6q/bgSnCZ34IIOztlWHzHFdlNj4uN3OUw7CP/SoKWIVRQx8pNboUhK1L9EZWcub0KL1+3+yskWnWiqxuqaXog=) 2025-09-03 00:20:11.843637 | orchestrator | 2025-09-03 00:20:11.843650 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-03 00:20:11.843662 | orchestrator | Wednesday 03 September 2025 00:20:09 +0000 (0:00:01.060) 0:00:21.285 *** 2025-09-03 00:20:11.843683 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCb8i8w6OF1S0p6FxMbDwXNO3Y4jZ1ahvHci8QW8NeRcEh0e8vfhcxuW5zk5bcUoEI9blZYrLtl8GfnB7fHwIchBVAgyDrwYnoY9rwMzKYguFj4Z9b/uey+P6lZ5mO104PiA+d2PL7amzHZltWxiqteAfj9bO0M5CtWA2VSUZU/5XsMHwEAaiow02iA0e6mLAR1f4UsjkZfJOPLUap+BfVnM8zJMku75OdRbzXH7m9PnLDmAVfJOUKkBp7C8ySQ39I6rAyR8NJmEMz4InOiZo5dCj7OfBYEMXoSO/dlsnFdDo5uYOjiGgirUnUORrsIk8cyszvm/9a1iY2VMMLTxpU499s9IPZleqnEl3rdSWqCE0jWIi5HeWhqDMzXPPJhud22BGM6DA6b3oUZWaN6/bmGo57DmOe7WJQHbxNbD5uFv141LsXbQYMQX1ZbT2TKkfd9Yfh3p5BHfYTtdAARECDKrw5JAkiMPBKUC5kbVc26jEyMaNsX7/NFjBowZQrXDTk=) 2025-09-03 00:20:11.843697 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEHK8tkTRk2DnNQD4zAtlYXP236DRTaXzTF3+Ja9uCtaAXlmR6wk9FN2uRN5A76903j5e9w6gtej72Dp3Uphr5E=) 2025-09-03 00:20:11.843711 
| orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAonTrACFc6dYmzJN8ASUXDW0Qa+En/rpR60SUAZPB/H) 2025-09-03 00:20:11.843724 | orchestrator | 2025-09-03 00:20:11.843737 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-03 00:20:11.843750 | orchestrator | Wednesday 03 September 2025 00:20:10 +0000 (0:00:01.056) 0:00:22.342 *** 2025-09-03 00:20:11.843763 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLAzurBIWu2rHDoIxoV+G/MZht7nldIZFk/sl69Q4Ty4NiQyvYa8jl2cRdD660lqguQwqNNY6jtuUqIrhqa7gLA=) 2025-09-03 00:20:11.843800 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCrneAYgCpLr3B+D8c09qdJX3EgDmx0nJKM3qbF5RzhWxfmnl2+UVwSZlp5smz5xF4LA9mi0ObeM6n6Th9k49aKhs0M7kplR1GETpH3vjQgTXDehtycQqTZYMm3E6M1oojRfKd77iyglXlr+fZOt2OaXjdzUW1jY8g0ft+myww8crFqEDKSCH6JVivp4vUd5t3WT9NtEtf1ZxGeOKOUXy+1siZuBkC1pYBNZToQxm0fliuF2IMDbyP+Lvszo1Khtiy8mI+sOaWVAxcm9G9+5Pql2xu3ZyojjL0krTKJSAIbpiEKOmBJXuNYKx1NLVd0iEeM3sJXMNkkn5qVz1AhzF+S+jZv5iGA12iymqM4Yst5ZwLl808fA/RQBFIR6hdGKS2+cHviqpE/2xgmGe6ANA8MRrwd1AA0A70VjQcIqW2RmlFs3S8YLsjNt4EIDoxXAO25G6fdfBPxQhMsraGFoQ1ltk8jjX9tquyTMNbTb8+3PWs8q5xb6HfS9V5b8Rce4Y8=) 2025-09-03 00:20:16.030210 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICrwIxsbfVGjEJ5qvbrBwOlZT7KXX2k+UinyhV5s/Fu5) 2025-09-03 00:20:16.030398 | orchestrator | 2025-09-03 00:20:16.030417 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-03 00:20:16.030430 | orchestrator | Wednesday 03 September 2025 00:20:11 +0000 (0:00:01.032) 0:00:23.375 *** 2025-09-03 00:20:16.030442 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPR0W3EzwSz2xogiMNZambHm8xAniqhuYPN0jGLPgRyz) 2025-09-03 00:20:16.030456 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDb/Gz/SxFiyKmdrCG0zpNuNVqzG5IZK20q7xedFghSmjdccVn4ju0+rXgVzdfhz7zJ4Yj3gtXLV5rg0qwqZemDuT3d3T+tiC+qbKI5G1CAf7Tvdcb6L8kAdAnHD0p+Mdc87OQgik2t+wE94woSqBGm5Gp6jxa5Cwa5Z9ADX9yxyXVuLnKSgjuTrVNJNCquhb520LclZO3nIFvJsDPcxE/UqtTPtXUPPqtbPLRqqn9CjDPFS6glpgWK8TUfU8k2toF0/pWUFDZTvCkL9703iFTtpOEQ7tBMub5iTxYOSe4ElnEbLBhsaVn/QxtjdVfJwRQHzjhAoQNT8ZpNm2RPoH5L3Y5bWBXPFFtZEA9V2AcOsBa9Iggth2gSHjE8ZGgvNmbrSyN6OzIM+hK91pcg14aa9avIag7ZDi3Va/l8OlcwYIpWPnWZI0yOXQsmjEa3W4OEqTVXLuEpQLdZmzaAd6QKE6l24/0RrePCCFUt5MMx509DNYPogLqw0y/6xxV2fW0=) 2025-09-03 00:20:16.030472 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI5LaX+0/ANM86Vp+1z9qUYBZQEqBE0KH5pjM1srcVLJAUMARMYGqF06E8pb0H9fvejr2TyFM/PNkCLLPWRHMBk=) 2025-09-03 00:20:16.030485 | orchestrator | 2025-09-03 00:20:16.030496 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-03 00:20:16.030507 | orchestrator | Wednesday 03 September 2025 00:20:12 +0000 (0:00:01.069) 0:00:24.444 *** 2025-09-03 00:20:16.030518 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDogDWnfCkKQeu3ewtUJSPlUjsQGZrbTnIqiW9ys/cXUzxtOap+mnHFbhdYbIg1MawDuoJLfh2Ul1w7uiyDF+G0=) 2025-09-03 00:20:16.030561 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDZebJYY8ccD3aZr2GxXyGSEMA5lk+hqR2cUOwW3ZcyfjvcmIcBR36wN41Qm354ZFoVkA4O0Ef09qqRr+rLB3Sc+bLsC+wrYEltb2RR1ww6seC/4Kwy8tUqGwkps04+0YOAYBFHmf1sNsxtZ/w9VHqIRh5o3jrG9fgtDDElFw/n0DCBQj7fTtRnJsCdHJ5j15LiQecj5hA47Singa8dcg1DpNROPnIqvU0QAS4b2XS1yur7QbiVG5NQPKUgb+MYfI6aCG8XJsmVrSYXrg6aIVEB7Woh5MMKQL+hB55GDD5aORzy/gGrqXXDf/ku2JZliwCaypgF0ly+Z5zIt/4tkSeUbl4B0WdgbY2EQi2hHbNzWC095UZZ3neDh4E2O1iNEWaMTxHq6tH35xZx5BeEUlvqLnCI1M+GCxEdTemw+WL7rNPVG0cVLLcEmaYpsH7/v1iQ86aHVIhzmZGdKm7tGDSELtlqfg8scRdsxfSboww4wVzQ3uDOAozYiWXKAW5fdlk=) 2025-09-03 00:20:16.030574 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILMaeZCSorn2deCWj9JDM+qgjVlK97KuycHGIQR7VB+T) 2025-09-03 00:20:16.030585 | orchestrator | 2025-09-03 00:20:16.030596 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-03 00:20:16.030607 | orchestrator | Wednesday 03 September 2025 00:20:13 +0000 (0:00:01.034) 0:00:25.479 *** 2025-09-03 00:20:16.030618 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQClNm/1ViC7NnqYjIOtAMxkhkPq0+felzTK5cCXpip7bsnN73POy04dcfO3QVbVKa/wDNejWUhfXfR5H+1f08oCVXJFjNZIm7YCVnrNCO/6aj4n7k8BvoMHUIjmQhUW8r/oMgBY6AVJfuqk+ZKgpZcTi57N4vtuKeb8mKivw3eHRLU98e5fRJuAMegshvE9B9Ag7fJXMhsZRdl5ZTzBRvcHsBLnS175+nhoqiZen2jzpt3qCWhp3jIBU11h0QdEMbp0bJA4Bryr5PWJLNQ9vkVg78wNs9cj1Y0yijyynW1S+lUB3OIib5AvFPGMZL7FBcfkTPutQVpYfLtsAWHozT3u2krb/AYPu0c3zU26HEy5LGQOgbSMIE7lgwRpMYUSQ4npaNSeShl+Wxjq9flWRea81WZAGb4jj9NwbpSvRGGLmSysQBkcIEAMY8yFKJbqTOnV3myfxN8QDREy7WxNrkfQz5Mt2oca0gcWlBX/15lICO/f7kwfhnf1yFHSHIEPl6U=) 2025-09-03 00:20:16.030630 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHast/EmtDofE5wt8hACpPc6lfX+eruBdP5zzVgyXSNb8FnBFUWEo55Jh9h9rmwHxuABhCRPoqfQxiauFw8TdLE=) 2025-09-03 00:20:16.030641 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGaNfZBQuvTDoVBDQbgvWcSIdQVcn57dCeLh7GK60nUP) 2025-09-03 00:20:16.030652 | orchestrator | 2025-09-03 00:20:16.030663 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-09-03 00:20:16.030674 | orchestrator | Wednesday 03 September 2025 00:20:15 +0000 (0:00:01.072) 0:00:26.551 *** 2025-09-03 00:20:16.030686 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-09-03 00:20:16.030697 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-09-03 00:20:16.030708 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-09-03 00:20:16.030719 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-09-03 00:20:16.030730 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-09-03 00:20:16.030763 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-09-03 00:20:16.030775 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-09-03 00:20:16.030786 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:20:16.030798 | orchestrator | 2025-09-03 00:20:16.030809 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2025-09-03 00:20:16.030820 | orchestrator | Wednesday 03 September 2025 00:20:15 +0000 (0:00:00.164) 0:00:26.716 *** 2025-09-03 00:20:16.030831 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:20:16.030842 | orchestrator | 2025-09-03 
00:20:16.030852 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-09-03 00:20:16.030863 | orchestrator | Wednesday 03 September 2025 00:20:15 +0000 (0:00:00.068) 0:00:26.785 *** 2025-09-03 00:20:16.030874 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:20:16.030885 | orchestrator | 2025-09-03 00:20:16.030895 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-09-03 00:20:16.030906 | orchestrator | Wednesday 03 September 2025 00:20:15 +0000 (0:00:00.053) 0:00:26.838 *** 2025-09-03 00:20:16.030924 | orchestrator | changed: [testbed-manager] 2025-09-03 00:20:16.030935 | orchestrator | 2025-09-03 00:20:16.030946 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-03 00:20:16.030958 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-03 00:20:16.030970 | orchestrator | 2025-09-03 00:20:16.030981 | orchestrator | 2025-09-03 00:20:16.030992 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-03 00:20:16.031002 | orchestrator | Wednesday 03 September 2025 00:20:15 +0000 (0:00:00.498) 0:00:27.336 *** 2025-09-03 00:20:16.031013 | orchestrator | =============================================================================== 2025-09-03 00:20:16.031024 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.85s 2025-09-03 00:20:16.031035 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.30s 2025-09-03 00:20:16.031047 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s 2025-09-03 00:20:16.031058 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2025-09-03 00:20:16.031069 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2025-09-03 00:20:16.031079 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-09-03 00:20:16.031112 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-09-03 00:20:16.031124 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-09-03 00:20:16.031135 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2025-09-03 00:20:16.031146 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2025-09-03 00:20:16.031157 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2025-09-03 00:20:16.031168 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2025-09-03 00:20:16.031178 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2025-09-03 00:20:16.031189 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2025-09-03 00:20:16.031200 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2025-09-03 00:20:16.031211 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2025-09-03 00:20:16.031222 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.50s 2025-09-03 
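
The known-hosts play above scans every host twice, once by hostname and once by ansible_host address, and writes the collected keys for the operator user. A minimal sketch of that scan-and-write step; the key types match the entries written above, while the target file and the final permission mode are assumptions (the log only reports "Set file permissions"):

    # ssh-keyscan by name and by address, as in the two scan tasks above.
    for target in testbed-manager testbed-node-{0..5} 192.168.16.5 192.168.16.{10..15}; do
        ssh-keyscan -t rsa,ecdsa,ed25519 "$target" >> ~/.ssh/known_hosts
    done
    chmod 0644 ~/.ssh/known_hosts   # assumed mode
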
00:20:16.031232 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.18s 2025-09-03 00:20:16.031244 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.16s 2025-09-03 00:20:16.031254 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.16s 2025-09-03 00:20:16.296565 | orchestrator | + osism apply squid 2025-09-03 00:20:28.385860 | orchestrator | 2025-09-03 00:20:28 | INFO  | Task 36f3d9f3-5d81-4c0b-9d86-922605bddf24 (squid) was prepared for execution. 2025-09-03 00:20:28.386006 | orchestrator | 2025-09-03 00:20:28 | INFO  | It takes a moment until task 36f3d9f3-5d81-4c0b-9d86-922605bddf24 (squid) has been started and output is visible here. 2025-09-03 00:22:21.712001 | orchestrator | 2025-09-03 00:22:21.712135 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-09-03 00:22:21.712152 | orchestrator | 2025-09-03 00:22:21.712165 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-09-03 00:22:21.712177 | orchestrator | Wednesday 03 September 2025 00:20:31 +0000 (0:00:00.122) 0:00:00.122 *** 2025-09-03 00:22:21.712207 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-09-03 00:22:21.712220 | orchestrator | 2025-09-03 00:22:21.712231 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-09-03 00:22:21.712269 | orchestrator | Wednesday 03 September 2025 00:20:31 +0000 (0:00:00.098) 0:00:00.220 *** 2025-09-03 00:22:21.712281 | orchestrator | ok: [testbed-manager] 2025-09-03 00:22:21.712293 | orchestrator | 2025-09-03 00:22:21.712304 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-09-03 00:22:21.712370 | orchestrator | Wednesday 03 September 2025 00:20:33 +0000 (0:00:01.109) 0:00:01.329 *** 2025-09-03 00:22:21.712382 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-09-03 00:22:21.712393 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-09-03 00:22:21.712404 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-09-03 00:22:21.712415 | orchestrator | 2025-09-03 00:22:21.712426 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-09-03 00:22:21.712437 | orchestrator | Wednesday 03 September 2025 00:20:34 +0000 (0:00:01.013) 0:00:02.343 *** 2025-09-03 00:22:21.712448 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-09-03 00:22:21.712459 | orchestrator | 2025-09-03 00:22:21.712470 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-09-03 00:22:21.712481 | orchestrator | Wednesday 03 September 2025 00:20:35 +0000 (0:00:00.937) 0:00:03.280 *** 2025-09-03 00:22:21.712492 | orchestrator | ok: [testbed-manager] 2025-09-03 00:22:21.712503 | orchestrator | 2025-09-03 00:22:21.712514 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-09-03 00:22:21.712525 | orchestrator | Wednesday 03 September 2025 00:20:35 +0000 (0:00:00.308) 0:00:03.589 *** 2025-09-03 00:22:21.712538 | orchestrator | changed: [testbed-manager] 2025-09-03 00:22:21.712551 | orchestrator | 2025-09-03 
00:22:21.712565 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-09-03 00:22:21.712577 | orchestrator | Wednesday 03 September 2025 00:20:36 +0000 (0:00:00.770) 0:00:04.360 *** 2025-09-03 00:22:21.712590 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 2025-09-03 00:22:21.712603 | orchestrator | ok: [testbed-manager] 2025-09-03 00:22:21.712616 | orchestrator | 2025-09-03 00:22:21.712628 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-09-03 00:22:21.712641 | orchestrator | Wednesday 03 September 2025 00:21:08 +0000 (0:00:32.548) 0:00:36.908 *** 2025-09-03 00:22:21.712654 | orchestrator | changed: [testbed-manager] 2025-09-03 00:22:21.712666 | orchestrator | 2025-09-03 00:22:21.712679 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-09-03 00:22:21.712691 | orchestrator | Wednesday 03 September 2025 00:21:20 +0000 (0:00:12.050) 0:00:48.958 *** 2025-09-03 00:22:21.712705 | orchestrator | Pausing for 60 seconds 2025-09-03 00:22:21.712718 | orchestrator | changed: [testbed-manager] 2025-09-03 00:22:21.712731 | orchestrator | 2025-09-03 00:22:21.712744 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-09-03 00:22:21.712757 | orchestrator | Wednesday 03 September 2025 00:22:20 +0000 (0:01:00.083) 0:01:49.042 *** 2025-09-03 00:22:21.712769 | orchestrator | ok: [testbed-manager] 2025-09-03 00:22:21.712783 | orchestrator | 2025-09-03 00:22:21.712795 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-09-03 00:22:21.712808 | orchestrator | Wednesday 03 September 2025 00:22:20 +0000 (0:00:00.091) 0:01:49.134 *** 2025-09-03 00:22:21.712820 | orchestrator | changed: [testbed-manager] 2025-09-03 00:22:21.712832 | orchestrator | 2025-09-03 00:22:21.712845 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-03 00:22:21.712858 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-03 00:22:21.712871 | orchestrator | 2025-09-03 00:22:21.712885 | orchestrator | 2025-09-03 00:22:21.712896 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-03 00:22:21.712907 | orchestrator | Wednesday 03 September 2025 00:22:21 +0000 (0:00:00.600) 0:01:49.734 *** 2025-09-03 00:22:21.712926 | orchestrator | =============================================================================== 2025-09-03 00:22:21.712937 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s 2025-09-03 00:22:21.712948 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 32.55s 2025-09-03 00:22:21.712959 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.05s 2025-09-03 00:22:21.712970 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.11s 2025-09-03 00:22:21.712981 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.01s 2025-09-03 00:22:21.712991 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 0.94s 2025-09-03 00:22:21.713002 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.77s 2025-09-03 
00:22:21.713013 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.60s 2025-09-03 00:22:21.713024 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.31s 2025-09-03 00:22:21.713035 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.10s 2025-09-03 00:22:21.713046 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.09s 2025-09-03 00:22:21.974354 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-09-03 00:22:21.974442 | orchestrator | ++ semver latest 9.0.0 2025-09-03 00:22:22.017113 | orchestrator | + [[ -1 -lt 0 ]] 2025-09-03 00:22:22.017166 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-09-03 00:22:22.017424 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-09-03 00:22:33.947287 | orchestrator | 2025-09-03 00:22:33 | INFO  | Task 9b63b9aa-eddf-414e-822e-12cada7bfbc8 (operator) was prepared for execution. 2025-09-03 00:22:33.947452 | orchestrator | 2025-09-03 00:22:33 | INFO  | It takes a moment until task 9b63b9aa-eddf-414e-822e-12cada7bfbc8 (operator) has been started and output is visible here. 2025-09-03 00:22:50.724830 | orchestrator | 2025-09-03 00:22:50.724939 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2025-09-03 00:22:50.724957 | orchestrator | 2025-09-03 00:22:50.724969 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-03 00:22:50.724981 | orchestrator | Wednesday 03 September 2025 00:22:37 +0000 (0:00:00.147) 0:00:00.147 *** 2025-09-03 00:22:50.724993 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:22:50.725006 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:22:50.725017 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:22:50.725028 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:22:50.725038 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:22:50.725049 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:22:50.725060 | orchestrator | 2025-09-03 00:22:50.725072 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-09-03 00:22:50.725083 | orchestrator | Wednesday 03 September 2025 00:22:42 +0000 (0:00:04.643) 0:00:04.791 *** 2025-09-03 00:22:50.725109 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:22:50.725120 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:22:50.725132 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:22:50.725143 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:22:50.725154 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:22:50.725165 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:22:50.725176 | orchestrator | 2025-09-03 00:22:50.725187 | orchestrator | PLAY [Apply role operator] ***************************************************** 2025-09-03 00:22:50.725198 | orchestrator | 2025-09-03 00:22:50.725209 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-09-03 00:22:50.725220 | orchestrator | Wednesday 03 September 2025 00:22:43 +0000 (0:00:00.749) 0:00:05.541 *** 2025-09-03 00:22:50.725231 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:22:50.725242 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:22:50.725253 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:22:50.725264 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:22:50.725275 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:22:50.725286 | orchestrator | 
ok: [testbed-node-5] 2025-09-03 00:22:50.725346 | orchestrator | 2025-09-03 00:22:50.725359 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-09-03 00:22:50.725372 | orchestrator | Wednesday 03 September 2025 00:22:43 +0000 (0:00:00.154) 0:00:05.696 *** 2025-09-03 00:22:50.725384 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:22:50.725396 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:22:50.725408 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:22:50.725421 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:22:50.725434 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:22:50.725446 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:22:50.725459 | orchestrator | 2025-09-03 00:22:50.725472 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-09-03 00:22:50.725484 | orchestrator | Wednesday 03 September 2025 00:22:43 +0000 (0:00:00.199) 0:00:05.895 *** 2025-09-03 00:22:50.725497 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:22:50.725511 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:22:50.725523 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:22:50.725535 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:22:50.725547 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:22:50.725561 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:22:50.725574 | orchestrator | 2025-09-03 00:22:50.725588 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-09-03 00:22:50.725600 | orchestrator | Wednesday 03 September 2025 00:22:44 +0000 (0:00:00.600) 0:00:06.496 *** 2025-09-03 00:22:50.725612 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:22:50.725625 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:22:50.725637 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:22:50.725650 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:22:50.725662 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:22:50.725674 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:22:50.725686 | orchestrator | 2025-09-03 00:22:50.725699 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-09-03 00:22:50.725713 | orchestrator | Wednesday 03 September 2025 00:22:44 +0000 (0:00:00.795) 0:00:07.291 *** 2025-09-03 00:22:50.725725 | orchestrator | changed: [testbed-node-1] => (item=adm) 2025-09-03 00:22:50.725736 | orchestrator | changed: [testbed-node-2] => (item=adm) 2025-09-03 00:22:50.725747 | orchestrator | changed: [testbed-node-3] => (item=adm) 2025-09-03 00:22:50.725758 | orchestrator | changed: [testbed-node-4] => (item=adm) 2025-09-03 00:22:50.725769 | orchestrator | changed: [testbed-node-5] => (item=adm) 2025-09-03 00:22:50.725780 | orchestrator | changed: [testbed-node-0] => (item=adm) 2025-09-03 00:22:50.725791 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2025-09-03 00:22:50.725802 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2025-09-03 00:22:50.725813 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2025-09-03 00:22:50.725823 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2025-09-03 00:22:50.725834 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2025-09-03 00:22:50.725845 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2025-09-03 00:22:50.725856 | orchestrator | 2025-09-03 00:22:50.725867 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] 
************************* 2025-09-03 00:22:50.725878 | orchestrator | Wednesday 03 September 2025 00:22:46 +0000 (0:00:01.187) 0:00:08.479 *** 2025-09-03 00:22:50.725889 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:22:50.725900 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:22:50.725910 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:22:50.725921 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:22:50.725931 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:22:50.725942 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:22:50.725953 | orchestrator | 2025-09-03 00:22:50.725964 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-09-03 00:22:50.725976 | orchestrator | Wednesday 03 September 2025 00:22:47 +0000 (0:00:01.216) 0:00:09.695 *** 2025-09-03 00:22:50.725986 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2025-09-03 00:22:50.726006 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To 2025-09-03 00:22:50.726071 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2025-09-03 00:22:50.726084 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2025-09-03 00:22:50.726113 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2025-09-03 00:22:50.726126 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2025-09-03 00:22:50.726136 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2025-09-03 00:22:50.726147 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2025-09-03 00:22:50.726158 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2025-09-03 00:22:50.726169 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2025-09-03 00:22:50.726179 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2025-09-03 00:22:50.726190 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2025-09-03 00:22:50.726201 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2025-09-03 00:22:50.726211 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2025-09-03 00:22:50.726222 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2025-09-03 00:22:50.726233 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2025-09-03 00:22:50.726244 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2025-09-03 00:22:50.726255 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2025-09-03 00:22:50.726265 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2025-09-03 00:22:50.726276 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2025-09-03 00:22:50.726287 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2025-09-03 00:22:50.726297 | orchestrator | 2025-09-03 00:22:50.726308 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2025-09-03 00:22:50.726340 | orchestrator | Wednesday 03 September 2025 00:22:48 +0000 (0:00:01.256) 0:00:10.952 *** 2025-09-03 00:22:50.726350 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:22:50.726362 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:22:50.726372 | orchestrator | skipping: 
[testbed-node-2] 2025-09-03 00:22:50.726383 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:22:50.726394 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:22:50.726405 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:22:50.726415 | orchestrator | 2025-09-03 00:22:50.726426 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-09-03 00:22:50.726437 | orchestrator | Wednesday 03 September 2025 00:22:48 +0000 (0:00:00.146) 0:00:11.098 *** 2025-09-03 00:22:50.726448 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:22:50.726459 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:22:50.726469 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:22:50.726480 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:22:50.726490 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:22:50.726501 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:22:50.726512 | orchestrator | 2025-09-03 00:22:50.726523 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-09-03 00:22:50.726534 | orchestrator | Wednesday 03 September 2025 00:22:49 +0000 (0:00:00.557) 0:00:11.656 *** 2025-09-03 00:22:50.726544 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:22:50.726555 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:22:50.726566 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:22:50.726577 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:22:50.726587 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:22:50.726598 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:22:50.726609 | orchestrator | 2025-09-03 00:22:50.726628 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-09-03 00:22:50.726639 | orchestrator | Wednesday 03 September 2025 00:22:49 +0000 (0:00:00.200) 0:00:11.856 *** 2025-09-03 00:22:50.726650 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-03 00:22:50.726665 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:22:50.726676 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-09-03 00:22:50.726687 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-03 00:22:50.726698 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:22:50.726709 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:22:50.726719 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-03 00:22:50.726730 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:22:50.726741 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-03 00:22:50.726752 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:22:50.726763 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-09-03 00:22:50.726773 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:22:50.726784 | orchestrator | 2025-09-03 00:22:50.726795 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-09-03 00:22:50.726806 | orchestrator | Wednesday 03 September 2025 00:22:50 +0000 (0:00:00.740) 0:00:12.596 *** 2025-09-03 00:22:50.726817 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:22:50.726828 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:22:50.726838 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:22:50.726849 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:22:50.726860 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:22:50.726870 | orchestrator | 
skipping: [testbed-node-5] 2025-09-03 00:22:50.726881 | orchestrator | 2025-09-03 00:22:50.726892 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-09-03 00:22:50.726903 | orchestrator | Wednesday 03 September 2025 00:22:50 +0000 (0:00:00.141) 0:00:12.738 *** 2025-09-03 00:22:50.726913 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:22:50.726924 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:22:50.726935 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:22:50.726945 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:22:50.726956 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:22:50.726967 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:22:50.726977 | orchestrator | 2025-09-03 00:22:50.726988 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-09-03 00:22:50.727006 | orchestrator | Wednesday 03 September 2025 00:22:50 +0000 (0:00:00.162) 0:00:12.901 *** 2025-09-03 00:22:50.727022 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:22:50.727033 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:22:50.727044 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:22:50.727054 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:22:50.727072 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:22:51.848993 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:22:51.849104 | orchestrator | 2025-09-03 00:22:51.849121 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-09-03 00:22:51.849135 | orchestrator | Wednesday 03 September 2025 00:22:50 +0000 (0:00:00.128) 0:00:13.029 *** 2025-09-03 00:22:51.849146 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:22:51.849158 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:22:51.849169 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:22:51.849179 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:22:51.849190 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:22:51.849201 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:22:51.849212 | orchestrator | 2025-09-03 00:22:51.849224 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-09-03 00:22:51.849235 | orchestrator | Wednesday 03 September 2025 00:22:51 +0000 (0:00:00.716) 0:00:13.745 *** 2025-09-03 00:22:51.849245 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:22:51.849256 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:22:51.849266 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:22:51.849304 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:22:51.849368 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:22:51.849381 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:22:51.849392 | orchestrator | 2025-09-03 00:22:51.849403 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-03 00:22:51.849415 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-03 00:22:51.849428 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-03 00:22:51.849439 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-03 00:22:51.849450 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  
rescued=0 ignored=0 2025-09-03 00:22:51.849461 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-03 00:22:51.849472 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-03 00:22:51.849483 | orchestrator | 2025-09-03 00:22:51.849494 | orchestrator | 2025-09-03 00:22:51.849505 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-03 00:22:51.849516 | orchestrator | Wednesday 03 September 2025 00:22:51 +0000 (0:00:00.201) 0:00:13.947 *** 2025-09-03 00:22:51.849527 | orchestrator | =============================================================================== 2025-09-03 00:22:51.849540 | orchestrator | Gathering Facts --------------------------------------------------------- 4.64s 2025-09-03 00:22:51.849552 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.26s 2025-09-03 00:22:51.849566 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.22s 2025-09-03 00:22:51.849578 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.19s 2025-09-03 00:22:51.849591 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.80s 2025-09-03 00:22:51.849603 | orchestrator | Do not require tty for all users ---------------------------------------- 0.75s 2025-09-03 00:22:51.849616 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.74s 2025-09-03 00:22:51.849628 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.72s 2025-09-03 00:22:51.849641 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.60s 2025-09-03 00:22:51.849654 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.56s 2025-09-03 00:22:51.849666 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.20s 2025-09-03 00:22:51.849680 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.20s 2025-09-03 00:22:51.849692 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.20s 2025-09-03 00:22:51.849705 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.16s 2025-09-03 00:22:51.849717 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.15s 2025-09-03 00:22:51.849730 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.15s 2025-09-03 00:22:51.849743 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.14s 2025-09-03 00:22:51.849755 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.13s 2025-09-03 00:22:52.107628 | orchestrator | + osism apply --environment custom facts 2025-09-03 00:22:53.936118 | orchestrator | 2025-09-03 00:22:53 | INFO  | Trying to run play facts in environment custom 2025-09-03 00:23:04.174857 | orchestrator | 2025-09-03 00:23:04 | INFO  | Task b1967dae-02ff-451b-9100-feb68030b069 (facts) was prepared for execution. 2025-09-03 00:23:04.174996 | orchestrator | 2025-09-03 00:23:04 | INFO  | It takes a moment until task b1967dae-02ff-451b-9100-feb68030b069 (facts) has been started and output is visible here. 
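The `osism apply operator` run recapped above provisions the deployment user on every testbed node: it creates the operator group and user, grants sudo membership, sets C.UTF-8 locale exports in `.bashrc`, and installs the SSH authorized key. As a rough orientation only, the per-node effect is comparable to the shell steps sketched below; this is a hedged sketch, not the osism.commons.operator implementation, and the user name, public-key path, and sudoers content are placeholders rather than values taken from this log.

```bash
#!/usr/bin/env bash
# Hedged sketch (run as root): approximate shell equivalent of the operator play above.
# OPERATOR, PUBKEY and the sudoers line are assumed placeholders, not values from this log.
set -euo pipefail

OPERATOR=operator                     # placeholder user name
PUBKEY=/tmp/operator_id_rsa.pub       # placeholder public key to authorize

groupadd --force "${OPERATOR}"                                                # "Create operator group"
id "${OPERATOR}" &>/dev/null || \
  useradd --create-home --gid "${OPERATOR}" --shell /bin/bash "${OPERATOR}"   # "Create user"
usermod --append --groups adm,sudo "${OPERATOR}"                              # "Add user to additional groups"

# "Copy user sudoers file": drop-in with assumed passwordless-sudo content
echo "${OPERATOR} ALL=(ALL) NOPASSWD: ALL" > "/etc/sudoers.d/${OPERATOR}"
chmod 0440 "/etc/sudoers.d/${OPERATOR}"

# "Set language variables in .bashrc configuration file"
for v in LANGUAGE LANG LC_ALL; do
  echo "export ${v}=C.UTF-8" >> "/home/${OPERATOR}/.bashrc"
done

# "Create .ssh directory" and "Set ssh authorized keys"
install -d -m 0700 -o "${OPERATOR}" -g "${OPERATOR}" "/home/${OPERATOR}/.ssh"
install -m 0600 -o "${OPERATOR}" -g "${OPERATOR}" "${PUBKEY}" "/home/${OPERATOR}/.ssh/authorized_keys"
```

Unlike the Ansible tasks above, which can be re-applied safely, this sketch is not idempotent (re-running it would append duplicate export lines to `.bashrc`).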
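The facts task just queued (whose play output follows) copies custom fact files such as testbed_ceph_devices and testbed_ceph_osd_devices into a custom facts directory on the nodes. For context, Ansible collects local facts from /etc/ansible/facts.d (the default fact path): a `*.fact` file there may be static INI/JSON, or executable and print JSON, and its content is exposed to subsequent plays as `ansible_local.<name>`. A minimal illustrative fact file is sketched below; the file name and device list are invented for the example and are not the testbed's actual fact content.

```bash
#!/usr/bin/env bash
# Illustrative executable local fact, e.g. saved as /etc/ansible/facts.d/example.fact
# and marked executable. After the next fact gathering it appears as
# ansible_local.example on that host. Name and values are examples only.
cat <<'EOF'
{"devices": ["/dev/sdb", "/dev/sdc"]}
EOF
```

Such facts can be inspected ad hoc with the setup module, e.g. `ansible testbed-node-3 -m setup -a 'filter=ansible_local'`; later plays can then reference the collected values via `ansible_local`.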
2025-09-03 00:23:47.343711 | orchestrator | 2025-09-03 00:23:47.343836 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2025-09-03 00:23:47.343854 | orchestrator | 2025-09-03 00:23:47.343867 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-09-03 00:23:47.343879 | orchestrator | Wednesday 03 September 2025 00:23:07 +0000 (0:00:00.062) 0:00:00.062 *** 2025-09-03 00:23:47.343890 | orchestrator | ok: [testbed-manager] 2025-09-03 00:23:47.343902 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:23:47.343914 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:23:47.343925 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:23:47.343935 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:23:47.343946 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:23:47.343957 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:23:47.343967 | orchestrator | 2025-09-03 00:23:47.343978 | orchestrator | TASK [Copy fact file] ********************************************************** 2025-09-03 00:23:47.343989 | orchestrator | Wednesday 03 September 2025 00:23:09 +0000 (0:00:01.364) 0:00:01.427 *** 2025-09-03 00:23:47.344000 | orchestrator | ok: [testbed-manager] 2025-09-03 00:23:47.344011 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:23:47.344022 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:23:47.344032 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:23:47.344043 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:23:47.344054 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:23:47.344064 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:23:47.344075 | orchestrator | 2025-09-03 00:23:47.344086 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2025-09-03 00:23:47.344097 | orchestrator | 2025-09-03 00:23:47.344107 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-09-03 00:23:47.344118 | orchestrator | Wednesday 03 September 2025 00:23:10 +0000 (0:00:01.178) 0:00:02.605 *** 2025-09-03 00:23:47.344129 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:23:47.344140 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:23:47.344151 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:23:47.344161 | orchestrator | 2025-09-03 00:23:47.344172 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-09-03 00:23:47.344184 | orchestrator | Wednesday 03 September 2025 00:23:10 +0000 (0:00:00.093) 0:00:02.699 *** 2025-09-03 00:23:47.344195 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:23:47.344205 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:23:47.344216 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:23:47.344227 | orchestrator | 2025-09-03 00:23:47.344240 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-09-03 00:23:47.344253 | orchestrator | Wednesday 03 September 2025 00:23:10 +0000 (0:00:00.189) 0:00:02.888 *** 2025-09-03 00:23:47.344266 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:23:47.344279 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:23:47.344292 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:23:47.344304 | orchestrator | 2025-09-03 00:23:47.344317 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-09-03 00:23:47.344365 | orchestrator | Wednesday 
03 September 2025 00:23:10 +0000 (0:00:00.168) 0:00:03.057 *** 2025-09-03 00:23:47.344381 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-03 00:23:47.344396 | orchestrator | 2025-09-03 00:23:47.344409 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-09-03 00:23:47.344422 | orchestrator | Wednesday 03 September 2025 00:23:10 +0000 (0:00:00.110) 0:00:03.168 *** 2025-09-03 00:23:47.344464 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:23:47.344478 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:23:47.344491 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:23:47.344503 | orchestrator | 2025-09-03 00:23:47.344515 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-09-03 00:23:47.344528 | orchestrator | Wednesday 03 September 2025 00:23:11 +0000 (0:00:00.410) 0:00:03.578 *** 2025-09-03 00:23:47.344541 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:23:47.344553 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:23:47.344566 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:23:47.344578 | orchestrator | 2025-09-03 00:23:47.344592 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-09-03 00:23:47.344605 | orchestrator | Wednesday 03 September 2025 00:23:11 +0000 (0:00:00.096) 0:00:03.674 *** 2025-09-03 00:23:47.344616 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:23:47.344627 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:23:47.344638 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:23:47.344648 | orchestrator | 2025-09-03 00:23:47.344659 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-09-03 00:23:47.344670 | orchestrator | Wednesday 03 September 2025 00:23:12 +0000 (0:00:01.019) 0:00:04.694 *** 2025-09-03 00:23:47.344681 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:23:47.344691 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:23:47.344702 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:23:47.344713 | orchestrator | 2025-09-03 00:23:47.344724 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-09-03 00:23:47.344735 | orchestrator | Wednesday 03 September 2025 00:23:12 +0000 (0:00:00.460) 0:00:05.154 *** 2025-09-03 00:23:47.344746 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:23:47.344757 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:23:47.344768 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:23:47.344779 | orchestrator | 2025-09-03 00:23:47.344789 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-09-03 00:23:47.344800 | orchestrator | Wednesday 03 September 2025 00:23:13 +0000 (0:00:01.022) 0:00:06.176 *** 2025-09-03 00:23:47.344811 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:23:47.344822 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:23:47.344833 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:23:47.344843 | orchestrator | 2025-09-03 00:23:47.344854 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2025-09-03 00:23:47.344865 | orchestrator | Wednesday 03 September 2025 00:23:30 +0000 (0:00:17.109) 0:00:23.286 *** 2025-09-03 00:23:47.344876 | orchestrator | 
skipping: [testbed-node-3] 2025-09-03 00:23:47.344887 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:23:47.344898 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:23:47.344908 | orchestrator | 2025-09-03 00:23:47.344938 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2025-09-03 00:23:47.344967 | orchestrator | Wednesday 03 September 2025 00:23:31 +0000 (0:00:00.100) 0:00:23.386 *** 2025-09-03 00:23:47.344979 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:23:47.344990 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:23:47.345000 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:23:47.345011 | orchestrator | 2025-09-03 00:23:47.345022 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-09-03 00:23:47.345033 | orchestrator | Wednesday 03 September 2025 00:23:38 +0000 (0:00:07.329) 0:00:30.716 *** 2025-09-03 00:23:47.345044 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:23:47.345055 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:23:47.345066 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:23:47.345076 | orchestrator | 2025-09-03 00:23:47.345087 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-09-03 00:23:47.345098 | orchestrator | Wednesday 03 September 2025 00:23:38 +0000 (0:00:00.424) 0:00:31.141 *** 2025-09-03 00:23:47.345109 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2025-09-03 00:23:47.345128 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2025-09-03 00:23:47.345138 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2025-09-03 00:23:47.345149 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2025-09-03 00:23:47.345160 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2025-09-03 00:23:47.345171 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 2025-09-03 00:23:47.345181 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2025-09-03 00:23:47.345192 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2025-09-03 00:23:47.345203 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2025-09-03 00:23:47.345213 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2025-09-03 00:23:47.345224 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2025-09-03 00:23:47.345235 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2025-09-03 00:23:47.345246 | orchestrator | 2025-09-03 00:23:47.345256 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-09-03 00:23:47.345267 | orchestrator | Wednesday 03 September 2025 00:23:42 +0000 (0:00:03.503) 0:00:34.644 *** 2025-09-03 00:23:47.345278 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:23:47.345289 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:23:47.345299 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:23:47.345310 | orchestrator | 2025-09-03 00:23:47.345321 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-03 00:23:47.345349 | orchestrator | 2025-09-03 00:23:47.345360 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-03 00:23:47.345371 | orchestrator | 
Wednesday 03 September 2025 00:23:43 +0000 (0:00:01.197) 0:00:35.842 *** 2025-09-03 00:23:47.345382 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:23:47.345393 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:23:47.345404 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:23:47.345414 | orchestrator | ok: [testbed-manager] 2025-09-03 00:23:47.345425 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:23:47.345436 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:23:47.345446 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:23:47.345457 | orchestrator | 2025-09-03 00:23:47.345468 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-03 00:23:47.345480 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-03 00:23:47.345491 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-03 00:23:47.345504 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-03 00:23:47.345515 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-03 00:23:47.345526 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-03 00:23:47.345537 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-03 00:23:47.345548 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-03 00:23:47.345559 | orchestrator | 2025-09-03 00:23:47.345570 | orchestrator | 2025-09-03 00:23:47.345581 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-03 00:23:47.345591 | orchestrator | Wednesday 03 September 2025 00:23:47 +0000 (0:00:03.794) 0:00:39.636 *** 2025-09-03 00:23:47.345602 | orchestrator | =============================================================================== 2025-09-03 00:23:47.345620 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.11s 2025-09-03 00:23:47.345631 | orchestrator | Install required packages (Debian) -------------------------------------- 7.33s 2025-09-03 00:23:47.345641 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.79s 2025-09-03 00:23:47.345652 | orchestrator | Copy fact files --------------------------------------------------------- 3.50s 2025-09-03 00:23:47.345668 | orchestrator | Create custom facts directory ------------------------------------------- 1.36s 2025-09-03 00:23:47.345679 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.20s 2025-09-03 00:23:47.345696 | orchestrator | Copy fact file ---------------------------------------------------------- 1.18s 2025-09-03 00:23:47.536995 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.02s 2025-09-03 00:23:47.537088 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.02s 2025-09-03 00:23:47.537101 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.46s 2025-09-03 00:23:47.537112 | orchestrator | Create custom facts directory ------------------------------------------- 0.42s 2025-09-03 00:23:47.537132 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory 
----- 0.41s 2025-09-03 00:23:47.537152 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.19s 2025-09-03 00:23:47.537172 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.17s 2025-09-03 00:23:47.537190 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.11s 2025-09-03 00:23:47.537210 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.10s 2025-09-03 00:23:47.537227 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.10s 2025-09-03 00:23:47.537244 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.09s 2025-09-03 00:23:47.792392 | orchestrator | + osism apply bootstrap 2025-09-03 00:23:59.785904 | orchestrator | 2025-09-03 00:23:59 | INFO  | Task f3f1ace7-2340-4b5a-bc62-e914c15a5bac (bootstrap) was prepared for execution. 2025-09-03 00:23:59.786073 | orchestrator | 2025-09-03 00:23:59 | INFO  | It takes a moment until task f3f1ace7-2340-4b5a-bc62-e914c15a5bac (bootstrap) has been started and output is visible here. 2025-09-03 00:24:14.871217 | orchestrator | 2025-09-03 00:24:14.871392 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2025-09-03 00:24:14.871413 | orchestrator | 2025-09-03 00:24:14.871426 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2025-09-03 00:24:14.871438 | orchestrator | Wednesday 03 September 2025 00:24:03 +0000 (0:00:00.134) 0:00:00.134 *** 2025-09-03 00:24:14.871449 | orchestrator | ok: [testbed-manager] 2025-09-03 00:24:14.871462 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:24:14.871474 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:24:14.871485 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:24:14.871496 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:24:14.871507 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:24:14.871517 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:24:14.871528 | orchestrator | 2025-09-03 00:24:14.871539 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-03 00:24:14.871550 | orchestrator | 2025-09-03 00:24:14.871562 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-03 00:24:14.871572 | orchestrator | Wednesday 03 September 2025 00:24:03 +0000 (0:00:00.188) 0:00:00.322 *** 2025-09-03 00:24:14.871583 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:24:14.871594 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:24:14.871605 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:24:14.871616 | orchestrator | ok: [testbed-manager] 2025-09-03 00:24:14.871627 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:24:14.871637 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:24:14.871648 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:24:14.871686 | orchestrator | 2025-09-03 00:24:14.871698 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 2025-09-03 00:24:14.871709 | orchestrator | 2025-09-03 00:24:14.871720 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-03 00:24:14.871731 | orchestrator | Wednesday 03 September 2025 00:24:07 +0000 (0:00:03.649) 0:00:03.972 *** 2025-09-03 00:24:14.871742 | orchestrator | skipping: [testbed-manager] => 
(item=testbed-manager)  2025-09-03 00:24:14.871753 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-09-03 00:24:14.871764 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2025-09-03 00:24:14.871775 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2025-09-03 00:24:14.871785 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-09-03 00:24:14.871796 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-03 00:24:14.871807 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-09-03 00:24:14.871817 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-09-03 00:24:14.871828 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-09-03 00:24:14.871839 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-09-03 00:24:14.871849 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-03 00:24:14.871861 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-09-03 00:24:14.871872 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2025-09-03 00:24:14.871883 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-09-03 00:24:14.871893 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-09-03 00:24:14.871904 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-03 00:24:14.871915 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-09-03 00:24:14.871925 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-09-03 00:24:14.871936 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-09-03 00:24:14.871947 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:24:14.871958 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2025-09-03 00:24:14.871969 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-09-03 00:24:14.871979 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:24:14.871990 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-09-03 00:24:14.872001 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-09-03 00:24:14.872012 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2025-09-03 00:24:14.872022 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-09-03 00:24:14.872033 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-09-03 00:24:14.872044 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-09-03 00:24:14.872054 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-09-03 00:24:14.872065 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-09-03 00:24:14.872076 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2025-09-03 00:24:14.872086 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-09-03 00:24:14.872097 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-09-03 00:24:14.872107 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-09-03 00:24:14.872118 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-09-03 00:24:14.872129 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:24:14.872139 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  
2025-09-03 00:24:14.872150 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-09-03 00:24:14.872161 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:24:14.872171 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-09-03 00:24:14.872191 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-09-03 00:24:14.872202 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-09-03 00:24:14.872213 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-09-03 00:24:14.872242 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-03 00:24:14.872253 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-09-03 00:24:14.872281 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-09-03 00:24:14.872293 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-03 00:24:14.872304 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-09-03 00:24:14.872314 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:24:14.872325 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-09-03 00:24:14.872358 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-03 00:24:14.872369 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:24:14.872380 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-09-03 00:24:14.872390 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-09-03 00:24:14.872401 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:24:14.872412 | orchestrator | 2025-09-03 00:24:14.872423 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2025-09-03 00:24:14.872434 | orchestrator | 2025-09-03 00:24:14.872445 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2025-09-03 00:24:14.872455 | orchestrator | Wednesday 03 September 2025 00:24:07 +0000 (0:00:00.365) 0:00:04.337 *** 2025-09-03 00:24:14.872466 | orchestrator | ok: [testbed-manager] 2025-09-03 00:24:14.872477 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:24:14.872488 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:24:14.872498 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:24:14.872509 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:24:14.872520 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:24:14.872530 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:24:14.872541 | orchestrator | 2025-09-03 00:24:14.872552 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2025-09-03 00:24:14.872563 | orchestrator | Wednesday 03 September 2025 00:24:09 +0000 (0:00:01.159) 0:00:05.497 *** 2025-09-03 00:24:14.872573 | orchestrator | ok: [testbed-manager] 2025-09-03 00:24:14.872584 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:24:14.872595 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:24:14.872605 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:24:14.872616 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:24:14.872627 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:24:14.872637 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:24:14.872648 | orchestrator | 2025-09-03 00:24:14.872659 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2025-09-03 00:24:14.872670 | orchestrator | Wednesday 03 September 2025 00:24:10 +0000 
(0:00:01.168) 0:00:06.665 *** 2025-09-03 00:24:14.872681 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-03 00:24:14.872695 | orchestrator | 2025-09-03 00:24:14.872706 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2025-09-03 00:24:14.872717 | orchestrator | Wednesday 03 September 2025 00:24:10 +0000 (0:00:00.268) 0:00:06.934 *** 2025-09-03 00:24:14.872727 | orchestrator | changed: [testbed-manager] 2025-09-03 00:24:14.872738 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:24:14.872749 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:24:14.872760 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:24:14.872771 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:24:14.872781 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:24:14.872792 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:24:14.872803 | orchestrator | 2025-09-03 00:24:14.872821 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2025-09-03 00:24:14.872832 | orchestrator | Wednesday 03 September 2025 00:24:12 +0000 (0:00:01.942) 0:00:08.877 *** 2025-09-03 00:24:14.872843 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:24:14.872856 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-03 00:24:14.872869 | orchestrator | 2025-09-03 00:24:14.872885 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2025-09-03 00:24:14.872896 | orchestrator | Wednesday 03 September 2025 00:24:12 +0000 (0:00:00.278) 0:00:09.155 *** 2025-09-03 00:24:14.872907 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:24:14.872917 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:24:14.872928 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:24:14.872939 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:24:14.872949 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:24:14.872960 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:24:14.872970 | orchestrator | 2025-09-03 00:24:14.872981 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2025-09-03 00:24:14.872992 | orchestrator | Wednesday 03 September 2025 00:24:13 +0000 (0:00:01.013) 0:00:10.169 *** 2025-09-03 00:24:14.873003 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:24:14.873014 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:24:14.873024 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:24:14.873035 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:24:14.873045 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:24:14.873056 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:24:14.873067 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:24:14.873077 | orchestrator | 2025-09-03 00:24:14.873088 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-09-03 00:24:14.873099 | orchestrator | Wednesday 03 September 2025 00:24:14 +0000 (0:00:00.596) 0:00:10.765 *** 2025-09-03 00:24:14.873109 | orchestrator | skipping: [testbed-node-0] 2025-09-03 
00:24:14.873120 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:24:14.873131 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:24:14.873141 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:24:14.873152 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:24:14.873162 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:24:14.873173 | orchestrator | ok: [testbed-manager] 2025-09-03 00:24:14.873184 | orchestrator | 2025-09-03 00:24:14.873195 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-09-03 00:24:14.873206 | orchestrator | Wednesday 03 September 2025 00:24:14 +0000 (0:00:00.411) 0:00:11.177 *** 2025-09-03 00:24:14.873217 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:24:14.873228 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:24:14.873245 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:24:27.003021 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:24:27.003141 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:24:27.003156 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:24:27.003168 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:24:27.003180 | orchestrator | 2025-09-03 00:24:27.003193 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-09-03 00:24:27.003207 | orchestrator | Wednesday 03 September 2025 00:24:14 +0000 (0:00:00.226) 0:00:11.403 *** 2025-09-03 00:24:27.003220 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-03 00:24:27.003252 | orchestrator | 2025-09-03 00:24:27.003264 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-09-03 00:24:27.003276 | orchestrator | Wednesday 03 September 2025 00:24:15 +0000 (0:00:00.283) 0:00:11.687 *** 2025-09-03 00:24:27.003328 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-03 00:24:27.003393 | orchestrator | 2025-09-03 00:24:27.003406 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-09-03 00:24:27.003417 | orchestrator | Wednesday 03 September 2025 00:24:15 +0000 (0:00:00.307) 0:00:11.994 *** 2025-09-03 00:24:27.003428 | orchestrator | ok: [testbed-manager] 2025-09-03 00:24:27.003441 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:24:27.003452 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:24:27.003463 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:24:27.003474 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:24:27.003485 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:24:27.003496 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:24:27.003507 | orchestrator | 2025-09-03 00:24:27.003518 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-09-03 00:24:27.003529 | orchestrator | Wednesday 03 September 2025 00:24:16 +0000 (0:00:01.316) 0:00:13.311 *** 2025-09-03 00:24:27.003540 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:24:27.003552 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:24:27.003567 | 
orchestrator | skipping: [testbed-node-1] 2025-09-03 00:24:27.003579 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:24:27.003593 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:24:27.003606 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:24:27.003618 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:24:27.003631 | orchestrator | 2025-09-03 00:24:27.003645 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-09-03 00:24:27.003659 | orchestrator | Wednesday 03 September 2025 00:24:17 +0000 (0:00:00.236) 0:00:13.547 *** 2025-09-03 00:24:27.003671 | orchestrator | ok: [testbed-manager] 2025-09-03 00:24:27.003685 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:24:27.003698 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:24:27.003711 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:24:27.003724 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:24:27.003737 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:24:27.003749 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:24:27.003761 | orchestrator | 2025-09-03 00:24:27.003774 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-09-03 00:24:27.003787 | orchestrator | Wednesday 03 September 2025 00:24:17 +0000 (0:00:00.530) 0:00:14.077 *** 2025-09-03 00:24:27.003800 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:24:27.003813 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:24:27.003826 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:24:27.003840 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:24:27.003851 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:24:27.003862 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:24:27.003873 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:24:27.003883 | orchestrator | 2025-09-03 00:24:27.003895 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-09-03 00:24:27.003907 | orchestrator | Wednesday 03 September 2025 00:24:17 +0000 (0:00:00.297) 0:00:14.375 *** 2025-09-03 00:24:27.003918 | orchestrator | ok: [testbed-manager] 2025-09-03 00:24:27.003929 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:24:27.003940 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:24:27.003951 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:24:27.003962 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:24:27.003973 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:24:27.003984 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:24:27.003995 | orchestrator | 2025-09-03 00:24:27.004006 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-09-03 00:24:27.004017 | orchestrator | Wednesday 03 September 2025 00:24:18 +0000 (0:00:00.587) 0:00:14.963 *** 2025-09-03 00:24:27.004036 | orchestrator | ok: [testbed-manager] 2025-09-03 00:24:27.004047 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:24:27.004057 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:24:27.004068 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:24:27.004079 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:24:27.004090 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:24:27.004100 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:24:27.004111 | orchestrator | 2025-09-03 00:24:27.004122 | orchestrator | TASK [osism.commons.resolvconf : Start/enable 
systemd-resolved service] ******** 2025-09-03 00:24:27.004133 | orchestrator | Wednesday 03 September 2025 00:24:19 +0000 (0:00:01.160) 0:00:16.123 *** 2025-09-03 00:24:27.004144 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:24:27.004155 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:24:27.004166 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:24:27.004177 | orchestrator | ok: [testbed-manager] 2025-09-03 00:24:27.004188 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:24:27.004199 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:24:27.004210 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:24:27.004221 | orchestrator | 2025-09-03 00:24:27.004232 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-09-03 00:24:27.004244 | orchestrator | Wednesday 03 September 2025 00:24:20 +0000 (0:00:01.192) 0:00:17.316 *** 2025-09-03 00:24:27.004273 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-03 00:24:27.004285 | orchestrator | 2025-09-03 00:24:27.004296 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-09-03 00:24:27.004307 | orchestrator | Wednesday 03 September 2025 00:24:21 +0000 (0:00:00.440) 0:00:17.757 *** 2025-09-03 00:24:27.004318 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:24:27.004329 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:24:27.004362 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:24:27.004374 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:24:27.004384 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:24:27.004395 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:24:27.004406 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:24:27.004417 | orchestrator | 2025-09-03 00:24:27.004428 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-09-03 00:24:27.004439 | orchestrator | Wednesday 03 September 2025 00:24:22 +0000 (0:00:01.251) 0:00:19.008 *** 2025-09-03 00:24:27.004450 | orchestrator | ok: [testbed-manager] 2025-09-03 00:24:27.004461 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:24:27.004471 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:24:27.004482 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:24:27.004493 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:24:27.004504 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:24:27.004515 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:24:27.004525 | orchestrator | 2025-09-03 00:24:27.004537 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-09-03 00:24:27.004548 | orchestrator | Wednesday 03 September 2025 00:24:22 +0000 (0:00:00.236) 0:00:19.245 *** 2025-09-03 00:24:27.004558 | orchestrator | ok: [testbed-manager] 2025-09-03 00:24:27.004569 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:24:27.004580 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:24:27.004590 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:24:27.004601 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:24:27.004612 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:24:27.004622 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:24:27.004633 | orchestrator | 2025-09-03 00:24:27.004644 | orchestrator | TASK 
[osism.commons.repository : Set repositories to default] ****************** 2025-09-03 00:24:27.004655 | orchestrator | Wednesday 03 September 2025 00:24:23 +0000 (0:00:00.267) 0:00:19.512 *** 2025-09-03 00:24:27.004666 | orchestrator | ok: [testbed-manager] 2025-09-03 00:24:27.004677 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:24:27.004694 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:24:27.004705 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:24:27.004716 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:24:27.004726 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:24:27.004737 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:24:27.004748 | orchestrator | 2025-09-03 00:24:27.004759 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-09-03 00:24:27.004770 | orchestrator | Wednesday 03 September 2025 00:24:23 +0000 (0:00:00.205) 0:00:19.718 *** 2025-09-03 00:24:27.004822 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-03 00:24:27.004837 | orchestrator | 2025-09-03 00:24:27.004848 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-09-03 00:24:27.004859 | orchestrator | Wednesday 03 September 2025 00:24:23 +0000 (0:00:00.279) 0:00:19.997 *** 2025-09-03 00:24:27.004870 | orchestrator | ok: [testbed-manager] 2025-09-03 00:24:27.004881 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:24:27.004892 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:24:27.004903 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:24:27.004913 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:24:27.004924 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:24:27.004935 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:24:27.004946 | orchestrator | 2025-09-03 00:24:27.004961 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-09-03 00:24:27.004973 | orchestrator | Wednesday 03 September 2025 00:24:24 +0000 (0:00:00.563) 0:00:20.561 *** 2025-09-03 00:24:27.004984 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:24:27.004994 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:24:27.005006 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:24:27.005016 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:24:27.005027 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:24:27.005038 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:24:27.005049 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:24:27.005059 | orchestrator | 2025-09-03 00:24:27.005070 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-09-03 00:24:27.005081 | orchestrator | Wednesday 03 September 2025 00:24:24 +0000 (0:00:00.195) 0:00:20.757 *** 2025-09-03 00:24:27.005092 | orchestrator | ok: [testbed-manager] 2025-09-03 00:24:27.005103 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:24:27.005114 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:24:27.005125 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:24:27.005136 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:24:27.005147 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:24:27.005157 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:24:27.005168 | orchestrator | 2025-09-03 
00:24:27.005179 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-09-03 00:24:27.005190 | orchestrator | Wednesday 03 September 2025 00:24:25 +0000 (0:00:01.020) 0:00:21.778 *** 2025-09-03 00:24:27.005201 | orchestrator | ok: [testbed-manager] 2025-09-03 00:24:27.005212 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:24:27.005223 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:24:27.005234 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:24:27.005245 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:24:27.005255 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:24:27.005266 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:24:27.005277 | orchestrator | 2025-09-03 00:24:27.005288 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-09-03 00:24:27.005299 | orchestrator | Wednesday 03 September 2025 00:24:25 +0000 (0:00:00.566) 0:00:22.344 *** 2025-09-03 00:24:27.005310 | orchestrator | ok: [testbed-manager] 2025-09-03 00:24:27.005321 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:24:27.005332 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:24:27.005359 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:24:27.005384 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:25:06.537585 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:25:06.537700 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:25:06.537716 | orchestrator | 2025-09-03 00:25:06.537730 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-09-03 00:25:06.537743 | orchestrator | Wednesday 03 September 2025 00:24:26 +0000 (0:00:01.098) 0:00:23.442 *** 2025-09-03 00:25:06.537754 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:25:06.537766 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:25:06.537777 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:25:06.537788 | orchestrator | changed: [testbed-manager] 2025-09-03 00:25:06.537799 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:25:06.537810 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:25:06.537821 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:25:06.537832 | orchestrator | 2025-09-03 00:25:06.537843 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-09-03 00:25:06.537855 | orchestrator | Wednesday 03 September 2025 00:24:44 +0000 (0:00:17.379) 0:00:40.822 *** 2025-09-03 00:25:06.537866 | orchestrator | ok: [testbed-manager] 2025-09-03 00:25:06.537877 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:25:06.537888 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:25:06.537899 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:25:06.537910 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:25:06.537921 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:25:06.537931 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:25:06.537942 | orchestrator | 2025-09-03 00:25:06.537954 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-09-03 00:25:06.537965 | orchestrator | Wednesday 03 September 2025 00:24:44 +0000 (0:00:00.215) 0:00:41.038 *** 2025-09-03 00:25:06.537976 | orchestrator | ok: [testbed-manager] 2025-09-03 00:25:06.537987 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:25:06.537998 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:25:06.538008 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:25:06.538069 | orchestrator | ok: 
[testbed-node-3] 2025-09-03 00:25:06.538082 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:25:06.538095 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:25:06.538108 | orchestrator | 2025-09-03 00:25:06.538121 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2025-09-03 00:25:06.538135 | orchestrator | Wednesday 03 September 2025 00:24:44 +0000 (0:00:00.210) 0:00:41.248 *** 2025-09-03 00:25:06.538150 | orchestrator | ok: [testbed-manager] 2025-09-03 00:25:06.538162 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:25:06.538176 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:25:06.538189 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:25:06.538202 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:25:06.538214 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:25:06.538227 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:25:06.538240 | orchestrator | 2025-09-03 00:25:06.538253 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-09-03 00:25:06.538266 | orchestrator | Wednesday 03 September 2025 00:24:44 +0000 (0:00:00.193) 0:00:41.442 *** 2025-09-03 00:25:06.538281 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-03 00:25:06.538297 | orchestrator | 2025-09-03 00:25:06.538311 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-09-03 00:25:06.538324 | orchestrator | Wednesday 03 September 2025 00:24:45 +0000 (0:00:00.279) 0:00:41.722 *** 2025-09-03 00:25:06.538337 | orchestrator | ok: [testbed-manager] 2025-09-03 00:25:06.538372 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:25:06.538385 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:25:06.538398 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:25:06.538411 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:25:06.538425 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:25:06.538466 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:25:06.538478 | orchestrator | 2025-09-03 00:25:06.538489 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-09-03 00:25:06.538500 | orchestrator | Wednesday 03 September 2025 00:24:46 +0000 (0:00:01.546) 0:00:43.268 *** 2025-09-03 00:25:06.538525 | orchestrator | changed: [testbed-manager] 2025-09-03 00:25:06.538536 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:25:06.538547 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:25:06.538558 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:25:06.538569 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:25:06.538579 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:25:06.538590 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:25:06.538601 | orchestrator | 2025-09-03 00:25:06.538612 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-09-03 00:25:06.538623 | orchestrator | Wednesday 03 September 2025 00:24:47 +0000 (0:00:01.064) 0:00:44.333 *** 2025-09-03 00:25:06.538635 | orchestrator | ok: [testbed-manager] 2025-09-03 00:25:06.538646 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:25:06.538657 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:25:06.538668 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:25:06.538679 | 
orchestrator | ok: [testbed-node-3] 2025-09-03 00:25:06.538690 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:25:06.538701 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:25:06.538711 | orchestrator | 2025-09-03 00:25:06.538723 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-09-03 00:25:06.538734 | orchestrator | Wednesday 03 September 2025 00:24:48 +0000 (0:00:00.786) 0:00:45.119 *** 2025-09-03 00:25:06.538746 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-03 00:25:06.538758 | orchestrator | 2025-09-03 00:25:06.538769 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2025-09-03 00:25:06.538781 | orchestrator | Wednesday 03 September 2025 00:24:48 +0000 (0:00:00.262) 0:00:45.382 *** 2025-09-03 00:25:06.538792 | orchestrator | changed: [testbed-manager] 2025-09-03 00:25:06.538803 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:25:06.538814 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:25:06.538825 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:25:06.538836 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:25:06.538847 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:25:06.538857 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:25:06.538869 | orchestrator | 2025-09-03 00:25:06.538897 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************ 2025-09-03 00:25:06.538909 | orchestrator | Wednesday 03 September 2025 00:24:49 +0000 (0:00:00.954) 0:00:46.336 *** 2025-09-03 00:25:06.538920 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:25:06.538931 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:25:06.538942 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:25:06.538953 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:25:06.538963 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:25:06.538974 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:25:06.538984 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:25:06.538995 | orchestrator | 2025-09-03 00:25:06.539006 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2025-09-03 00:25:06.539017 | orchestrator | Wednesday 03 September 2025 00:24:50 +0000 (0:00:00.332) 0:00:46.669 *** 2025-09-03 00:25:06.539027 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:25:06.539038 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:25:06.539049 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:25:06.539059 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:25:06.539070 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:25:06.539080 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:25:06.539091 | orchestrator | changed: [testbed-manager] 2025-09-03 00:25:06.539111 | orchestrator | 2025-09-03 00:25:06.539122 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-09-03 00:25:06.539133 | orchestrator | Wednesday 03 September 2025 00:25:01 +0000 (0:00:11.359) 0:00:58.028 *** 2025-09-03 00:25:06.539144 | orchestrator | ok: [testbed-manager] 2025-09-03 00:25:06.539155 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:25:06.539166 | orchestrator | ok: [testbed-node-2] 2025-09-03 
00:25:06.539176 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:25:06.539187 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:25:06.539198 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:25:06.539208 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:25:06.539219 | orchestrator | 2025-09-03 00:25:06.539230 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2025-09-03 00:25:06.539241 | orchestrator | Wednesday 03 September 2025 00:25:02 +0000 (0:00:01.074) 0:00:59.103 *** 2025-09-03 00:25:06.539252 | orchestrator | ok: [testbed-manager] 2025-09-03 00:25:06.539263 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:25:06.539273 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:25:06.539284 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:25:06.539295 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:25:06.539305 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:25:06.539316 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:25:06.539326 | orchestrator | 2025-09-03 00:25:06.539337 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2025-09-03 00:25:06.539362 | orchestrator | Wednesday 03 September 2025 00:25:03 +0000 (0:00:00.874) 0:00:59.978 *** 2025-09-03 00:25:06.539374 | orchestrator | ok: [testbed-manager] 2025-09-03 00:25:06.539385 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:25:06.539395 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:25:06.539406 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:25:06.539417 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:25:06.539428 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:25:06.539439 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:25:06.539449 | orchestrator | 2025-09-03 00:25:06.539460 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2025-09-03 00:25:06.539471 | orchestrator | Wednesday 03 September 2025 00:25:03 +0000 (0:00:00.226) 0:01:00.204 *** 2025-09-03 00:25:06.539482 | orchestrator | ok: [testbed-manager] 2025-09-03 00:25:06.539493 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:25:06.539503 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:25:06.539514 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:25:06.539525 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:25:06.539536 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:25:06.539546 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:25:06.539557 | orchestrator | 2025-09-03 00:25:06.539568 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2025-09-03 00:25:06.539579 | orchestrator | Wednesday 03 September 2025 00:25:03 +0000 (0:00:00.206) 0:01:00.411 *** 2025-09-03 00:25:06.539590 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-03 00:25:06.539602 | orchestrator | 2025-09-03 00:25:06.539613 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2025-09-03 00:25:06.539624 | orchestrator | Wednesday 03 September 2025 00:25:04 +0000 (0:00:00.252) 0:01:00.663 *** 2025-09-03 00:25:06.539634 | orchestrator | ok: [testbed-manager] 2025-09-03 00:25:06.539645 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:25:06.539656 | orchestrator | ok: [testbed-node-0] 2025-09-03 
00:25:06.539667 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:25:06.539677 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:25:06.539688 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:25:06.539698 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:25:06.539709 | orchestrator | 2025-09-03 00:25:06.539720 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2025-09-03 00:25:06.539731 | orchestrator | Wednesday 03 September 2025 00:25:05 +0000 (0:00:01.569) 0:01:02.232 *** 2025-09-03 00:25:06.539748 | orchestrator | changed: [testbed-manager] 2025-09-03 00:25:06.539759 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:25:06.539769 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:25:06.539780 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:25:06.539791 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:25:06.539802 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:25:06.539812 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:25:06.539823 | orchestrator | 2025-09-03 00:25:06.539834 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2025-09-03 00:25:06.539845 | orchestrator | Wednesday 03 September 2025 00:25:06 +0000 (0:00:00.533) 0:01:02.766 *** 2025-09-03 00:25:06.539856 | orchestrator | ok: [testbed-manager] 2025-09-03 00:25:06.539866 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:25:06.539877 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:25:06.539888 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:25:06.539898 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:25:06.539909 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:25:06.539919 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:25:06.539930 | orchestrator | 2025-09-03 00:25:06.539948 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2025-09-03 00:27:24.288640 | orchestrator | Wednesday 03 September 2025 00:25:06 +0000 (0:00:00.210) 0:01:02.976 *** 2025-09-03 00:27:24.288767 | orchestrator | ok: [testbed-manager] 2025-09-03 00:27:24.288785 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:27:24.288796 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:27:24.288807 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:27:24.288817 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:27:24.288828 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:27:24.288839 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:27:24.288850 | orchestrator | 2025-09-03 00:27:24.288862 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2025-09-03 00:27:24.288874 | orchestrator | Wednesday 03 September 2025 00:25:07 +0000 (0:00:01.171) 0:01:04.147 *** 2025-09-03 00:27:24.288885 | orchestrator | changed: [testbed-manager] 2025-09-03 00:27:24.288896 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:27:24.288907 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:27:24.288918 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:27:24.288928 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:27:24.288939 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:27:24.288949 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:27:24.288960 | orchestrator | 2025-09-03 00:27:24.288972 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2025-09-03 00:27:24.288983 | orchestrator | Wednesday 03 September 2025 00:25:09 +0000 
(0:00:01.991) 0:01:06.139 *** 2025-09-03 00:27:24.288993 | orchestrator | ok: [testbed-manager] 2025-09-03 00:27:24.289004 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:27:24.289015 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:27:24.289025 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:27:24.289036 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:27:24.289047 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:27:24.289057 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:27:24.289068 | orchestrator | 2025-09-03 00:27:24.289079 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2025-09-03 00:27:24.289089 | orchestrator | Wednesday 03 September 2025 00:25:11 +0000 (0:00:02.174) 0:01:08.314 *** 2025-09-03 00:27:24.289100 | orchestrator | ok: [testbed-manager] 2025-09-03 00:27:24.289111 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:27:24.289121 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:27:24.289132 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:27:24.289142 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:27:24.289153 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:27:24.289165 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:27:24.289179 | orchestrator | 2025-09-03 00:27:24.289191 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2025-09-03 00:27:24.289233 | orchestrator | Wednesday 03 September 2025 00:25:48 +0000 (0:00:36.999) 0:01:45.313 *** 2025-09-03 00:27:24.289263 | orchestrator | changed: [testbed-manager] 2025-09-03 00:27:24.289276 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:27:24.289288 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:27:24.289301 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:27:24.289313 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:27:24.289326 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:27:24.289339 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:27:24.289351 | orchestrator | 2025-09-03 00:27:24.289364 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2025-09-03 00:27:24.289376 | orchestrator | Wednesday 03 September 2025 00:27:06 +0000 (0:01:17.368) 0:03:02.681 *** 2025-09-03 00:27:24.289412 | orchestrator | ok: [testbed-manager] 2025-09-03 00:27:24.289425 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:27:24.289437 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:27:24.289450 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:27:24.289463 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:27:24.289475 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:27:24.289488 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:27:24.289501 | orchestrator | 2025-09-03 00:27:24.289514 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2025-09-03 00:27:24.289526 | orchestrator | Wednesday 03 September 2025 00:27:07 +0000 (0:00:01.577) 0:03:04.259 *** 2025-09-03 00:27:24.289537 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:27:24.289548 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:27:24.289559 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:27:24.289575 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:27:24.289586 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:27:24.289596 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:27:24.289607 | orchestrator | changed: [testbed-manager] 2025-09-03 00:27:24.289618 | orchestrator | 2025-09-03 
00:27:24.289629 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2025-09-03 00:27:24.289639 | orchestrator | Wednesday 03 September 2025 00:27:18 +0000 (0:00:10.967) 0:03:15.227 *** 2025-09-03 00:27:24.289659 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2025-09-03 00:27:24.289676 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2025-09-03 00:27:24.289713 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2025-09-03 00:27:24.289727 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-09-03 00:27:24.289748 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2025-09-03 00:27:24.289759 | orchestrator | 2025-09-03 00:27:24.289770 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2025-09-03 00:27:24.289781 | orchestrator | Wednesday 03 September 2025 00:27:19 +0000 (0:00:00.313) 0:03:15.540 *** 2025-09-03 00:27:24.289792 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-09-03 00:27:24.289803 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:27:24.289814 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-09-03 00:27:24.289825 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:27:24.289836 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-09-03 00:27:24.289846 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:27:24.289857 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-09-03 00:27:24.289868 | orchestrator | skipping: 
[testbed-node-5] 2025-09-03 00:27:24.289879 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-03 00:27:24.289890 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-03 00:27:24.289900 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-03 00:27:24.289911 | orchestrator | 2025-09-03 00:27:24.289922 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2025-09-03 00:27:24.289933 | orchestrator | Wednesday 03 September 2025 00:27:19 +0000 (0:00:00.560) 0:03:16.101 *** 2025-09-03 00:27:24.289944 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-09-03 00:27:24.289955 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-09-03 00:27:24.289966 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-09-03 00:27:24.289977 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-09-03 00:27:24.289987 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-09-03 00:27:24.290003 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-09-03 00:27:24.290061 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-09-03 00:27:24.290076 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-09-03 00:27:24.290087 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-09-03 00:27:24.290097 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-09-03 00:27:24.290108 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:27:24.290119 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-09-03 00:27:24.290129 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-09-03 00:27:24.290140 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-09-03 00:27:24.290151 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-09-03 00:27:24.290161 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-09-03 00:27:24.290172 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-09-03 00:27:24.290190 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-09-03 00:27:24.290200 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-09-03 00:27:24.290211 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-09-03 00:27:24.290222 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-09-03 00:27:24.290240 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  
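[Editor's note] The sysctl output around this point shows each tunable being applied only on hosts in the matching group: the rabbitmq keepalive/backlog values land on the control nodes while the manager and the other nodes skip them, and the generic vm.swappiness value is applied everywhere. A minimal sketch of that group-scoped pattern using the ansible.posix.sysctl module follows; the variable layout and group names are illustrative assumptions and not the actual osism.commons.sysctl implementation.

    # Hypothetical playbook: apply per-group sysctl tunables (illustrative only).
    - hosts: all
      become: true
      vars:
        # Assumed structure; the key/value pairs mirror those visible in the log.
        sysctl_group_settings:
          rabbitmq:
            - { name: net.ipv4.tcp_keepalive_time, value: 6 }
            - { name: net.core.somaxconn, value: 4096 }
          generic:
            - { name: vm.swappiness, value: 1 }
      tasks:
        - name: Set sysctl parameters for the rabbitmq group
          ansible.posix.sysctl:
            name: "{{ item.name }}"
            value: "{{ item.value }}"
            sysctl_set: true
            state: present
            reload: true
          loop: "{{ sysctl_group_settings.rabbitmq }}"
          # Hosts outside the group skip the task, matching the skipping entries in the log.
          when: "'rabbitmq' in group_names"
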
2025-09-03 00:27:26.365958 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-09-03 00:27:26.366130 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-09-03 00:27:26.366147 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-09-03 00:27:26.366160 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-09-03 00:27:26.366171 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-09-03 00:27:26.366182 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-09-03 00:27:26.366194 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-09-03 00:27:26.366205 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-09-03 00:27:26.366216 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-09-03 00:27:26.366226 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-09-03 00:27:26.366237 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-09-03 00:27:26.366248 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-09-03 00:27:26.366259 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-09-03 00:27:26.366270 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-09-03 00:27:26.366281 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:27:26.366293 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-09-03 00:27:26.366304 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-09-03 00:27:26.366316 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:27:26.366327 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-09-03 00:27:26.366338 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-09-03 00:27:26.366350 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-09-03 00:27:26.366361 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:27:26.366372 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-09-03 00:27:26.366412 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-09-03 00:27:26.366424 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-09-03 00:27:26.366434 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-09-03 00:27:26.366445 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-09-03 00:27:26.366475 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-09-03 00:27:26.366489 | orchestrator | 
changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-09-03 00:27:26.366524 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-09-03 00:27:26.366538 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-09-03 00:27:26.366550 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-09-03 00:27:26.366564 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-09-03 00:27:26.366577 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-09-03 00:27:26.366589 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-09-03 00:27:26.366602 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-09-03 00:27:26.366615 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-09-03 00:27:26.366628 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-09-03 00:27:26.366640 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-09-03 00:27:26.366653 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-09-03 00:27:26.366666 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-09-03 00:27:26.366678 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-09-03 00:27:26.366692 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-09-03 00:27:26.366725 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-09-03 00:27:26.366739 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-09-03 00:27:26.366752 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-09-03 00:27:26.366764 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-09-03 00:27:26.366777 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-09-03 00:27:26.366789 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-09-03 00:27:26.366802 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-09-03 00:27:26.366816 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-09-03 00:27:26.366828 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-09-03 00:27:26.366841 | orchestrator | 2025-09-03 00:27:26.366854 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2025-09-03 00:27:26.366865 | orchestrator | Wednesday 03 September 2025 00:27:24 +0000 (0:00:04.620) 0:03:20.721 *** 2025-09-03 00:27:26.366876 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-03 00:27:26.366887 | orchestrator | changed: [testbed-node-0] => 
(item={'name': 'vm.swappiness', 'value': 1}) 2025-09-03 00:27:26.366898 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-03 00:27:26.366908 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-03 00:27:26.366919 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-03 00:27:26.366930 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-03 00:27:26.366941 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-03 00:27:26.366952 | orchestrator | 2025-09-03 00:27:26.366963 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2025-09-03 00:27:26.366981 | orchestrator | Wednesday 03 September 2025 00:27:24 +0000 (0:00:00.597) 0:03:21.319 *** 2025-09-03 00:27:26.366992 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-09-03 00:27:26.367003 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:27:26.367015 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-09-03 00:27:26.367026 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-09-03 00:27:26.367036 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:27:26.367047 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-09-03 00:27:26.367058 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:27:26.367069 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:27:26.367080 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-09-03 00:27:26.367091 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-09-03 00:27:26.367102 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-09-03 00:27:26.367113 | orchestrator | 2025-09-03 00:27:26.367132 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2025-09-03 00:27:26.367143 | orchestrator | Wednesday 03 September 2025 00:27:25 +0000 (0:00:00.551) 0:03:21.871 *** 2025-09-03 00:27:26.367154 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-09-03 00:27:26.367165 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-09-03 00:27:26.367176 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:27:26.367187 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-09-03 00:27:26.367198 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:27:26.367209 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-09-03 00:27:26.367219 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:27:26.367230 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:27:26.367241 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-09-03 00:27:26.367252 | orchestrator | changed: [testbed-node-5] => (item={'name': 
'fs.inotify.max_user_instances', 'value': 1024}) 2025-09-03 00:27:26.367263 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-09-03 00:27:26.367274 | orchestrator | 2025-09-03 00:27:26.367285 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2025-09-03 00:27:26.367296 | orchestrator | Wednesday 03 September 2025 00:27:26 +0000 (0:00:00.668) 0:03:22.539 *** 2025-09-03 00:27:26.367306 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:27:26.367317 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:27:26.367328 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:27:26.367339 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:27:26.367350 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:27:26.367367 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:27:37.829765 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:27:37.829925 | orchestrator | 2025-09-03 00:27:37.829958 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2025-09-03 00:27:37.829983 | orchestrator | Wednesday 03 September 2025 00:27:26 +0000 (0:00:00.272) 0:03:22.812 *** 2025-09-03 00:27:37.830004 | orchestrator | ok: [testbed-manager] 2025-09-03 00:27:37.830097 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:27:37.830118 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:27:37.830138 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:27:37.830197 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:27:37.830217 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:27:37.830238 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:27:37.830258 | orchestrator | 2025-09-03 00:27:37.830282 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2025-09-03 00:27:37.830305 | orchestrator | Wednesday 03 September 2025 00:27:32 +0000 (0:00:05.712) 0:03:28.525 *** 2025-09-03 00:27:37.830329 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2025-09-03 00:27:37.830351 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2025-09-03 00:27:37.830373 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:27:37.830424 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:27:37.830445 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2025-09-03 00:27:37.830468 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2025-09-03 00:27:37.830490 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:27:37.830511 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2025-09-03 00:27:37.830534 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:27:37.830557 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2025-09-03 00:27:37.830579 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:27:37.830607 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:27:37.830627 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2025-09-03 00:27:37.830645 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:27:37.830663 | orchestrator | 2025-09-03 00:27:37.830681 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2025-09-03 00:27:37.830700 | orchestrator | Wednesday 03 September 2025 00:27:32 +0000 (0:00:00.280) 0:03:28.806 *** 2025-09-03 00:27:37.830718 | orchestrator | ok: [testbed-manager] => (item=cron) 2025-09-03 00:27:37.830736 | orchestrator | ok: [testbed-node-1] => (item=cron) 
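[Editor's note] The services role seen here first populates service facts and then only starts/enables services that are actually required (cron in this run; the nscd check is skipped because the service is absent). A rough sketch of that pattern with ansible.builtin.service_facts and ansible.builtin.service; the list variable name is an assumption for illustration, not the collection's real interface.

    # Hypothetical sketch: gather service facts, then enable a list of required services.
    - hosts: all
      become: true
      vars:
        required_services: [cron]        # assumed variable name
      tasks:
        - name: Populate service facts
          ansible.builtin.service_facts:

        - name: Start/enable required services
          ansible.builtin.service:
            name: "{{ item }}"
            state: started
            enabled: true
          loop: "{{ required_services }}"
          # Only touch units that service_facts actually reported on this host.
          when: item + '.service' in ansible_facts.services
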
2025-09-03 00:27:37.830753 | orchestrator | ok: [testbed-node-0] => (item=cron) 2025-09-03 00:27:37.830770 | orchestrator | ok: [testbed-node-2] => (item=cron) 2025-09-03 00:27:37.830787 | orchestrator | ok: [testbed-node-3] => (item=cron) 2025-09-03 00:27:37.830804 | orchestrator | ok: [testbed-node-4] => (item=cron) 2025-09-03 00:27:37.830820 | orchestrator | ok: [testbed-node-5] => (item=cron) 2025-09-03 00:27:37.830836 | orchestrator | 2025-09-03 00:27:37.830853 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2025-09-03 00:27:37.830871 | orchestrator | Wednesday 03 September 2025 00:27:33 +0000 (0:00:01.032) 0:03:29.838 *** 2025-09-03 00:27:37.830891 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-03 00:27:37.830911 | orchestrator | 2025-09-03 00:27:37.830929 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2025-09-03 00:27:37.830945 | orchestrator | Wednesday 03 September 2025 00:27:33 +0000 (0:00:00.471) 0:03:30.309 *** 2025-09-03 00:27:37.830964 | orchestrator | ok: [testbed-manager] 2025-09-03 00:27:37.830981 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:27:37.830998 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:27:37.831015 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:27:37.831034 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:27:37.831052 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:27:37.831072 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:27:37.831092 | orchestrator | 2025-09-03 00:27:37.831130 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2025-09-03 00:27:37.831151 | orchestrator | Wednesday 03 September 2025 00:27:35 +0000 (0:00:01.170) 0:03:31.480 *** 2025-09-03 00:27:37.831169 | orchestrator | ok: [testbed-manager] 2025-09-03 00:27:37.831186 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:27:37.831205 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:27:37.831223 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:27:37.831241 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:27:37.831254 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:27:37.831278 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:27:37.831289 | orchestrator | 2025-09-03 00:27:37.831300 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2025-09-03 00:27:37.831311 | orchestrator | Wednesday 03 September 2025 00:27:35 +0000 (0:00:00.643) 0:03:32.123 *** 2025-09-03 00:27:37.831322 | orchestrator | changed: [testbed-manager] 2025-09-03 00:27:37.831333 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:27:37.831343 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:27:37.831354 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:27:37.831364 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:27:37.831375 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:27:37.831417 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:27:37.831436 | orchestrator | 2025-09-03 00:27:37.831457 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2025-09-03 00:27:37.831475 | orchestrator | Wednesday 03 September 2025 00:27:36 +0000 (0:00:00.600) 0:03:32.724 *** 2025-09-03 00:27:37.831490 | orchestrator 
| ok: [testbed-manager] 2025-09-03 00:27:37.831501 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:27:37.831511 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:27:37.831522 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:27:37.831532 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:27:37.831543 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:27:37.831553 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:27:37.831564 | orchestrator | 2025-09-03 00:27:37.831575 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2025-09-03 00:27:37.831586 | orchestrator | Wednesday 03 September 2025 00:27:36 +0000 (0:00:00.604) 0:03:33.328 *** 2025-09-03 00:27:37.831627 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1756857837.0541375, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-03 00:27:37.831644 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1756857865.3546393, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-03 00:27:37.831656 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1756857860.9789648, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-03 00:27:37.831668 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1756857871.745914, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-03 00:27:37.831687 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1756857876.194095, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-03 00:27:37.831708 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1756857872.1652732, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-03 00:27:37.831720 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1756857857.0392003, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-03 00:27:37.831749 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-03 00:27:53.802380 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-03 00:27:53.802546 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-03 00:27:53.802562 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-03 00:27:53.802604 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-03 00:27:53.802615 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-03 00:27:53.802626 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-03 00:27:53.802636 | orchestrator | 2025-09-03 00:27:53.802649 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-09-03 00:27:53.802660 | orchestrator | Wednesday 03 September 2025 00:27:37 +0000 (0:00:00.933) 0:03:34.261 *** 2025-09-03 00:27:53.802671 | orchestrator | changed: [testbed-manager] 2025-09-03 00:27:53.802681 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:27:53.802691 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:27:53.802701 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:27:53.802710 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:27:53.802720 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:27:53.802729 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:27:53.802739 | orchestrator | 2025-09-03 00:27:53.802749 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2025-09-03 00:27:53.802759 | orchestrator | Wednesday 03 September 2025 00:27:38 +0000 (0:00:01.088) 0:03:35.350 *** 2025-09-03 00:27:53.802768 | orchestrator | changed: [testbed-manager] 2025-09-03 00:27:53.802778 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:27:53.802788 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:27:53.802797 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:27:53.802826 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:27:53.802836 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:27:53.802846 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:27:53.802855 | orchestrator | 2025-09-03 00:27:53.802865 | orchestrator | TASK 
[osism.commons.motd : Copy issue.net file] ******************************** 2025-09-03 00:27:53.802875 | orchestrator | Wednesday 03 September 2025 00:27:40 +0000 (0:00:01.148) 0:03:36.498 *** 2025-09-03 00:27:53.802886 | orchestrator | changed: [testbed-manager] 2025-09-03 00:27:53.802898 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:27:53.802909 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:27:53.802921 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:27:53.802931 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:27:53.802942 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:27:53.802953 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:27:53.802964 | orchestrator | 2025-09-03 00:27:53.802976 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2025-09-03 00:27:53.802989 | orchestrator | Wednesday 03 September 2025 00:27:41 +0000 (0:00:01.118) 0:03:37.617 *** 2025-09-03 00:27:53.803008 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:27:53.803019 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:27:53.803031 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:27:53.803041 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:27:53.803052 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:27:53.803064 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:27:53.803074 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:27:53.803085 | orchestrator | 2025-09-03 00:27:53.803097 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2025-09-03 00:27:53.803109 | orchestrator | Wednesday 03 September 2025 00:27:41 +0000 (0:00:00.253) 0:03:37.870 *** 2025-09-03 00:27:53.803121 | orchestrator | ok: [testbed-manager] 2025-09-03 00:27:53.803152 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:27:53.803164 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:27:53.803175 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:27:53.803187 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:27:53.803198 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:27:53.803209 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:27:53.803220 | orchestrator | 2025-09-03 00:27:53.803231 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2025-09-03 00:27:53.803242 | orchestrator | Wednesday 03 September 2025 00:27:42 +0000 (0:00:00.727) 0:03:38.598 *** 2025-09-03 00:27:53.803254 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-03 00:27:53.803266 | orchestrator | 2025-09-03 00:27:53.803276 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2025-09-03 00:27:53.803285 | orchestrator | Wednesday 03 September 2025 00:27:42 +0000 (0:00:00.414) 0:03:39.013 *** 2025-09-03 00:27:53.803295 | orchestrator | ok: [testbed-manager] 2025-09-03 00:27:53.803304 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:27:53.803314 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:27:53.803323 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:27:53.803333 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:27:53.803343 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:27:53.803352 | orchestrator | changed: [testbed-node-4] 
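[Editor's note] The rng role that runs next installs a hardware RNG daemon, removes the older haveged package, and then makes sure the service is enabled and running. A minimal sketch of those three steps is below; the package and service names are assumptions for illustration (not taken from the role itself, which resolves them per distribution).

    # Hypothetical sketch of the install/remove/enable pattern (names are assumed).
    - hosts: all
      become: true
      tasks:
        - name: Install an RNG daemon package
          ansible.builtin.apt:
            name: rng-tools            # assumed package name
            state: present

        - name: Remove the haveged package
          ansible.builtin.apt:
            name: haveged
            state: absent

        - name: Enable and start the RNG service
          ansible.builtin.service:
            name: rngd                 # assumed service name
            state: started
            enabled: true
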
2025-09-03 00:27:53.803361 | orchestrator | 2025-09-03 00:27:53.803371 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2025-09-03 00:27:53.803381 | orchestrator | Wednesday 03 September 2025 00:27:50 +0000 (0:00:07.740) 0:03:46.753 *** 2025-09-03 00:27:53.803413 | orchestrator | ok: [testbed-manager] 2025-09-03 00:27:53.803428 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:27:53.803438 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:27:53.803448 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:27:53.803457 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:27:53.803466 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:27:53.803476 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:27:53.803486 | orchestrator | 2025-09-03 00:27:53.803496 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2025-09-03 00:27:53.803505 | orchestrator | Wednesday 03 September 2025 00:27:51 +0000 (0:00:01.395) 0:03:48.148 *** 2025-09-03 00:27:53.803515 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:27:53.803525 | orchestrator | ok: [testbed-manager] 2025-09-03 00:27:53.803534 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:27:53.803543 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:27:53.803553 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:27:53.803562 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:27:53.803572 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:27:53.803581 | orchestrator | 2025-09-03 00:27:53.803591 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2025-09-03 00:27:53.803600 | orchestrator | Wednesday 03 September 2025 00:27:52 +0000 (0:00:01.113) 0:03:49.262 *** 2025-09-03 00:27:53.803610 | orchestrator | ok: [testbed-manager] 2025-09-03 00:27:53.803629 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:27:53.803639 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:27:53.803648 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:27:53.803657 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:27:53.803667 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:27:53.803676 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:27:53.803685 | orchestrator | 2025-09-03 00:27:53.803695 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2025-09-03 00:27:53.803705 | orchestrator | Wednesday 03 September 2025 00:27:53 +0000 (0:00:00.297) 0:03:49.559 *** 2025-09-03 00:27:53.803715 | orchestrator | ok: [testbed-manager] 2025-09-03 00:27:53.803724 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:27:53.803733 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:27:53.803743 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:27:53.803752 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:27:53.803761 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:27:53.803771 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:27:53.803780 | orchestrator | 2025-09-03 00:27:53.803790 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2025-09-03 00:27:53.803799 | orchestrator | Wednesday 03 September 2025 00:27:53 +0000 (0:00:00.406) 0:03:49.966 *** 2025-09-03 00:27:53.803809 | orchestrator | ok: [testbed-manager] 2025-09-03 00:27:53.803818 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:27:53.803827 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:27:53.803837 | orchestrator | ok: [testbed-node-2] 2025-09-03 
00:27:53.803846 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:27:53.803862 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:29:02.663651 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:29:02.663771 | orchestrator | 2025-09-03 00:29:02.663788 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2025-09-03 00:29:02.663801 | orchestrator | Wednesday 03 September 2025 00:27:53 +0000 (0:00:00.279) 0:03:50.245 *** 2025-09-03 00:29:02.663812 | orchestrator | ok: [testbed-manager] 2025-09-03 00:29:02.663823 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:29:02.663834 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:29:02.663844 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:29:02.663855 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:29:02.663866 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:29:02.663877 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:29:02.663887 | orchestrator | 2025-09-03 00:29:02.663906 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2025-09-03 00:29:02.663926 | orchestrator | Wednesday 03 September 2025 00:27:59 +0000 (0:00:05.605) 0:03:55.850 *** 2025-09-03 00:29:02.663946 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-03 00:29:02.663967 | orchestrator | 2025-09-03 00:29:02.663986 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2025-09-03 00:29:02.664005 | orchestrator | Wednesday 03 September 2025 00:27:59 +0000 (0:00:00.414) 0:03:56.264 *** 2025-09-03 00:29:02.664024 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2025-09-03 00:29:02.664045 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2025-09-03 00:29:02.664064 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2025-09-03 00:29:02.664081 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2025-09-03 00:29:02.664093 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:29:02.664104 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2025-09-03 00:29:02.664116 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2025-09-03 00:29:02.664127 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:29:02.664138 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2025-09-03 00:29:02.664148 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2025-09-03 00:29:02.664159 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:29:02.664198 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:29:02.664212 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2025-09-03 00:29:02.664226 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2025-09-03 00:29:02.664239 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2025-09-03 00:29:02.664252 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2025-09-03 00:29:02.664266 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:29:02.664278 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:29:02.664291 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2025-09-03 00:29:02.664303 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  
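The "Disable apt-daily timers" task above is skipped on every host in this run, so the stock apt-daily and apt-daily-upgrade timers are left untouched. Roles of this kind usually stop and mask those timers so that unattended apt activity cannot race the deployment's own package operations. A minimal sketch of such a task, assuming plain Ansible and a made-up cleanup_disable_apt_timers guard variable (this is not the osism.commons.cleanup implementation):

- name: Disable apt-daily timers                      # illustrative sketch only
  ansible.builtin.systemd:
    name: "{{ item }}.timer"
    state: stopped
    enabled: false
    masked: true
  loop:
    - apt-daily-upgrade
    - apt-daily
  when: cleanup_disable_apt_timers | default(false)   # assumed guard; the real condition is role-internal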
2025-09-03 00:29:02.664316 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:29:02.664329 | orchestrator | 2025-09-03 00:29:02.664342 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2025-09-03 00:29:02.664356 | orchestrator | Wednesday 03 September 2025 00:28:00 +0000 (0:00:00.340) 0:03:56.605 *** 2025-09-03 00:29:02.664384 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-03 00:29:02.664398 | orchestrator | 2025-09-03 00:29:02.664411 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2025-09-03 00:29:02.664451 | orchestrator | Wednesday 03 September 2025 00:28:00 +0000 (0:00:00.390) 0:03:56.996 *** 2025-09-03 00:29:02.664464 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2025-09-03 00:29:02.664477 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:29:02.664490 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2025-09-03 00:29:02.664502 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2025-09-03 00:29:02.664515 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:29:02.664529 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:29:02.664540 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2025-09-03 00:29:02.664551 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2025-09-03 00:29:02.664562 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:29:02.664572 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:29:02.664583 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2025-09-03 00:29:02.664594 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:29:02.664604 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2025-09-03 00:29:02.664615 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:29:02.664626 | orchestrator | 2025-09-03 00:29:02.664637 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2025-09-03 00:29:02.664648 | orchestrator | Wednesday 03 September 2025 00:28:00 +0000 (0:00:00.304) 0:03:57.301 *** 2025-09-03 00:29:02.664658 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-03 00:29:02.664669 | orchestrator | 2025-09-03 00:29:02.664680 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2025-09-03 00:29:02.664691 | orchestrator | Wednesday 03 September 2025 00:28:01 +0000 (0:00:00.404) 0:03:57.705 *** 2025-09-03 00:29:02.664702 | orchestrator | changed: [testbed-manager] 2025-09-03 00:29:02.664732 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:29:02.664743 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:29:02.664754 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:29:02.664765 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:29:02.664775 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:29:02.664786 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:29:02.664797 | orchestrator | 2025-09-03 
00:29:02.664808 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2025-09-03 00:29:02.664827 | orchestrator | Wednesday 03 September 2025 00:28:35 +0000 (0:00:34.595) 0:04:32.301 *** 2025-09-03 00:29:02.664838 | orchestrator | changed: [testbed-manager] 2025-09-03 00:29:02.664848 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:29:02.664859 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:29:02.664870 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:29:02.664880 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:29:02.664891 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:29:02.664901 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:29:02.664912 | orchestrator | 2025-09-03 00:29:02.664923 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2025-09-03 00:29:02.664934 | orchestrator | Wednesday 03 September 2025 00:28:43 +0000 (0:00:07.863) 0:04:40.164 *** 2025-09-03 00:29:02.664944 | orchestrator | changed: [testbed-manager] 2025-09-03 00:29:02.664955 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:29:02.664966 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:29:02.664976 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:29:02.664987 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:29:02.664997 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:29:02.665008 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:29:02.665019 | orchestrator | 2025-09-03 00:29:02.665029 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2025-09-03 00:29:02.665046 | orchestrator | Wednesday 03 September 2025 00:28:50 +0000 (0:00:07.189) 0:04:47.354 *** 2025-09-03 00:29:02.665065 | orchestrator | ok: [testbed-manager] 2025-09-03 00:29:02.665083 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:29:02.665102 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:29:02.665119 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:29:02.665138 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:29:02.665157 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:29:02.665172 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:29:02.665183 | orchestrator | 2025-09-03 00:29:02.665194 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2025-09-03 00:29:02.665209 | orchestrator | Wednesday 03 September 2025 00:28:52 +0000 (0:00:01.803) 0:04:49.157 *** 2025-09-03 00:29:02.665228 | orchestrator | changed: [testbed-manager] 2025-09-03 00:29:02.665245 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:29:02.665272 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:29:02.665294 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:29:02.665313 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:29:02.665333 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:29:02.665350 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:29:02.665366 | orchestrator | 2025-09-03 00:29:02.665378 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2025-09-03 00:29:02.665389 | orchestrator | Wednesday 03 September 2025 00:28:58 +0000 (0:00:06.099) 0:04:55.256 *** 2025-09-03 00:29:02.665400 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, 
testbed-node-4, testbed-node-5 2025-09-03 00:29:02.665437 | orchestrator | 2025-09-03 00:29:02.665451 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2025-09-03 00:29:02.665470 | orchestrator | Wednesday 03 September 2025 00:28:59 +0000 (0:00:00.484) 0:04:55.741 *** 2025-09-03 00:29:02.665481 | orchestrator | changed: [testbed-manager] 2025-09-03 00:29:02.665492 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:29:02.665503 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:29:02.665513 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:29:02.665524 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:29:02.665535 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:29:02.665546 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:29:02.665557 | orchestrator | 2025-09-03 00:29:02.665568 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2025-09-03 00:29:02.665588 | orchestrator | Wednesday 03 September 2025 00:28:59 +0000 (0:00:00.692) 0:04:56.434 *** 2025-09-03 00:29:02.665600 | orchestrator | ok: [testbed-manager] 2025-09-03 00:29:02.665611 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:29:02.665622 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:29:02.665632 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:29:02.665643 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:29:02.665654 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:29:02.665665 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:29:02.665676 | orchestrator | 2025-09-03 00:29:02.665687 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2025-09-03 00:29:02.665698 | orchestrator | Wednesday 03 September 2025 00:29:01 +0000 (0:00:01.641) 0:04:58.076 *** 2025-09-03 00:29:02.665709 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:29:02.665720 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:29:02.665731 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:29:02.665742 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:29:02.665752 | orchestrator | changed: [testbed-manager] 2025-09-03 00:29:02.665763 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:29:02.665774 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:29:02.665785 | orchestrator | 2025-09-03 00:29:02.665796 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2025-09-03 00:29:02.665807 | orchestrator | Wednesday 03 September 2025 00:29:02 +0000 (0:00:00.785) 0:04:58.862 *** 2025-09-03 00:29:02.665818 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:29:02.665829 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:29:02.665840 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:29:02.665850 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:29:02.665861 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:29:02.665872 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:29:02.665883 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:29:02.665894 | orchestrator | 2025-09-03 00:29:02.665905 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2025-09-03 00:29:02.665925 | orchestrator | Wednesday 03 September 2025 00:29:02 +0000 (0:00:00.239) 0:04:59.101 *** 2025-09-03 00:29:28.136138 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:29:28.136268 | orchestrator | skipping: [testbed-node-0] 2025-09-03 
00:29:28.136284 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:29:28.136296 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:29:28.136308 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:29:28.136319 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:29:28.136330 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:29:28.136341 | orchestrator | 2025-09-03 00:29:28.136354 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2025-09-03 00:29:28.136367 | orchestrator | Wednesday 03 September 2025 00:29:02 +0000 (0:00:00.336) 0:04:59.438 *** 2025-09-03 00:29:28.136378 | orchestrator | ok: [testbed-manager] 2025-09-03 00:29:28.136390 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:29:28.136401 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:29:28.136411 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:29:28.136475 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:29:28.136488 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:29:28.136499 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:29:28.136510 | orchestrator | 2025-09-03 00:29:28.136522 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2025-09-03 00:29:28.136533 | orchestrator | Wednesday 03 September 2025 00:29:03 +0000 (0:00:00.274) 0:04:59.713 *** 2025-09-03 00:29:28.136544 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:29:28.136555 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:29:28.136566 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:29:28.136577 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:29:28.136588 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:29:28.136598 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:29:28.136609 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:29:28.136648 | orchestrator | 2025-09-03 00:29:28.136662 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2025-09-03 00:29:28.136676 | orchestrator | Wednesday 03 September 2025 00:29:03 +0000 (0:00:00.261) 0:04:59.974 *** 2025-09-03 00:29:28.136689 | orchestrator | ok: [testbed-manager] 2025-09-03 00:29:28.136702 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:29:28.136714 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:29:28.136726 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:29:28.136739 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:29:28.136751 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:29:28.136764 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:29:28.136776 | orchestrator | 2025-09-03 00:29:28.136788 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2025-09-03 00:29:28.136800 | orchestrator | Wednesday 03 September 2025 00:29:03 +0000 (0:00:00.282) 0:05:00.256 *** 2025-09-03 00:29:28.136813 | orchestrator | ok: [testbed-manager] =>  2025-09-03 00:29:28.136826 | orchestrator |  docker_version: 5:27.5.1 2025-09-03 00:29:28.136839 | orchestrator | ok: [testbed-node-0] =>  2025-09-03 00:29:28.136851 | orchestrator |  docker_version: 5:27.5.1 2025-09-03 00:29:28.136864 | orchestrator | ok: [testbed-node-1] =>  2025-09-03 00:29:28.136876 | orchestrator |  docker_version: 5:27.5.1 2025-09-03 00:29:28.136888 | orchestrator | ok: [testbed-node-2] =>  2025-09-03 00:29:28.136901 | orchestrator |  docker_version: 5:27.5.1 2025-09-03 00:29:28.136913 | orchestrator | ok: [testbed-node-3] =>  2025-09-03 00:29:28.136925 
| orchestrator |  docker_version: 5:27.5.1 2025-09-03 00:29:28.136938 | orchestrator | ok: [testbed-node-4] =>  2025-09-03 00:29:28.136950 | orchestrator |  docker_version: 5:27.5.1 2025-09-03 00:29:28.136964 | orchestrator | ok: [testbed-node-5] =>  2025-09-03 00:29:28.136976 | orchestrator |  docker_version: 5:27.5.1 2025-09-03 00:29:28.136989 | orchestrator | 2025-09-03 00:29:28.137002 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2025-09-03 00:29:28.137016 | orchestrator | Wednesday 03 September 2025 00:29:04 +0000 (0:00:00.267) 0:05:00.524 *** 2025-09-03 00:29:28.137028 | orchestrator | ok: [testbed-manager] =>  2025-09-03 00:29:28.137039 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-03 00:29:28.137049 | orchestrator | ok: [testbed-node-0] =>  2025-09-03 00:29:28.137060 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-03 00:29:28.137070 | orchestrator | ok: [testbed-node-1] =>  2025-09-03 00:29:28.137081 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-03 00:29:28.137091 | orchestrator | ok: [testbed-node-2] =>  2025-09-03 00:29:28.137102 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-03 00:29:28.137112 | orchestrator | ok: [testbed-node-3] =>  2025-09-03 00:29:28.137123 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-03 00:29:28.137134 | orchestrator | ok: [testbed-node-4] =>  2025-09-03 00:29:28.137144 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-03 00:29:28.137155 | orchestrator | ok: [testbed-node-5] =>  2025-09-03 00:29:28.137166 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-03 00:29:28.137176 | orchestrator | 2025-09-03 00:29:28.137187 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2025-09-03 00:29:28.137198 | orchestrator | Wednesday 03 September 2025 00:29:04 +0000 (0:00:00.271) 0:05:00.795 *** 2025-09-03 00:29:28.137208 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:29:28.137219 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:29:28.137229 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:29:28.137240 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:29:28.137250 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:29:28.137261 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:29:28.137272 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:29:28.137282 | orchestrator | 2025-09-03 00:29:28.137293 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2025-09-03 00:29:28.137304 | orchestrator | Wednesday 03 September 2025 00:29:04 +0000 (0:00:00.277) 0:05:01.073 *** 2025-09-03 00:29:28.137314 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:29:28.137333 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:29:28.137344 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:29:28.137354 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:29:28.137365 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:29:28.137375 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:29:28.137386 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:29:28.137397 | orchestrator | 2025-09-03 00:29:28.137407 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2025-09-03 00:29:28.137418 | orchestrator | Wednesday 03 September 2025 00:29:04 +0000 (0:00:00.266) 0:05:01.340 *** 2025-09-03 00:29:28.137469 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-03 00:29:28.137485 | orchestrator | 2025-09-03 00:29:28.137496 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2025-09-03 00:29:28.137507 | orchestrator | Wednesday 03 September 2025 00:29:05 +0000 (0:00:00.401) 0:05:01.741 *** 2025-09-03 00:29:28.137518 | orchestrator | ok: [testbed-manager] 2025-09-03 00:29:28.137529 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:29:28.137540 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:29:28.137551 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:29:28.137562 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:29:28.137572 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:29:28.137583 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:29:28.137594 | orchestrator | 2025-09-03 00:29:28.137605 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2025-09-03 00:29:28.137616 | orchestrator | Wednesday 03 September 2025 00:29:06 +0000 (0:00:00.827) 0:05:02.569 *** 2025-09-03 00:29:28.137627 | orchestrator | ok: [testbed-manager] 2025-09-03 00:29:28.137637 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:29:28.137648 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:29:28.137659 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:29:28.137669 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:29:28.137680 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:29:28.137691 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:29:28.137701 | orchestrator | 2025-09-03 00:29:28.137712 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2025-09-03 00:29:28.137724 | orchestrator | Wednesday 03 September 2025 00:29:09 +0000 (0:00:03.226) 0:05:05.795 *** 2025-09-03 00:29:28.137735 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2025-09-03 00:29:28.137746 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2025-09-03 00:29:28.137757 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2025-09-03 00:29:28.137768 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2025-09-03 00:29:28.137779 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2025-09-03 00:29:28.137789 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2025-09-03 00:29:28.137800 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:29:28.137811 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2025-09-03 00:29:28.137821 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2025-09-03 00:29:28.137832 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2025-09-03 00:29:28.137843 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:29:28.137874 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2025-09-03 00:29:28.137885 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2025-09-03 00:29:28.137895 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2025-09-03 00:29:28.137906 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:29:28.137917 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2025-09-03 00:29:28.137928 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  
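The "Check whether packages are installed that should not be installed" task above (its remaining per-host results continue below) guards against the distribution's own container packages, which would conflict with the Docker packages installed later in this play. A minimal sketch of that kind of guard, assuming standard Ansible modules and a made-up docker_fail_on_conflicts flag (not the osism.services.docker implementation):

- name: Gather package facts
  ansible.builtin.package_facts:
    manager: apt

- name: Fail if conflicting container packages are present   # illustrative sketch only
  ansible.builtin.fail:
    msg: "{{ item }} is installed and conflicts with the Docker packages managed by this role"
  loop:
    - containerd
    - docker.io
    - docker-engine
  when:
    - docker_fail_on_conflicts | default(false)              # assumed flag
    - item in ansible_facts.packages

In this run the check is skipped on all hosts, and the role goes on to add the upstream Docker apt repository, pin the Docker and Docker CLI package versions (5:27.5.1 here), and install the pinned packages.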
2025-09-03 00:29:28.137939 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2025-09-03 00:29:28.137958 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:29:28.137969 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2025-09-03 00:29:28.137979 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2025-09-03 00:29:28.137990 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2025-09-03 00:29:28.138001 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:29:28.138012 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:29:28.138089 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2025-09-03 00:29:28.138102 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2025-09-03 00:29:28.138112 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2025-09-03 00:29:28.138123 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:29:28.138134 | orchestrator | 2025-09-03 00:29:28.138145 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2025-09-03 00:29:28.138156 | orchestrator | Wednesday 03 September 2025 00:29:09 +0000 (0:00:00.552) 0:05:06.348 *** 2025-09-03 00:29:28.138167 | orchestrator | ok: [testbed-manager] 2025-09-03 00:29:28.138178 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:29:28.138188 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:29:28.138199 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:29:28.138210 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:29:28.138221 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:29:28.138231 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:29:28.138242 | orchestrator | 2025-09-03 00:29:28.138253 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2025-09-03 00:29:28.138264 | orchestrator | Wednesday 03 September 2025 00:29:16 +0000 (0:00:06.167) 0:05:12.515 *** 2025-09-03 00:29:28.138275 | orchestrator | ok: [testbed-manager] 2025-09-03 00:29:28.138286 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:29:28.138296 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:29:28.138307 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:29:28.138318 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:29:28.138329 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:29:28.138339 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:29:28.138350 | orchestrator | 2025-09-03 00:29:28.138361 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2025-09-03 00:29:28.138372 | orchestrator | Wednesday 03 September 2025 00:29:17 +0000 (0:00:01.275) 0:05:13.790 *** 2025-09-03 00:29:28.138383 | orchestrator | ok: [testbed-manager] 2025-09-03 00:29:28.138394 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:29:28.138405 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:29:28.138416 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:29:28.138443 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:29:28.138454 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:29:28.138464 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:29:28.138475 | orchestrator | 2025-09-03 00:29:28.138486 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2025-09-03 00:29:28.138497 | orchestrator | Wednesday 03 September 2025 00:29:24 +0000 (0:00:07.535) 0:05:21.326 *** 2025-09-03 
00:29:28.138508 | orchestrator | changed: [testbed-manager] 2025-09-03 00:29:28.138519 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:29:28.138530 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:29:28.138550 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:30:11.080983 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:30:11.081083 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:30:11.081094 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:30:11.081103 | orchestrator | 2025-09-03 00:30:11.081113 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2025-09-03 00:30:11.081123 | orchestrator | Wednesday 03 September 2025 00:29:28 +0000 (0:00:03.247) 0:05:24.574 *** 2025-09-03 00:30:11.081131 | orchestrator | ok: [testbed-manager] 2025-09-03 00:30:11.081140 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:30:11.081148 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:30:11.081178 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:30:11.081186 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:30:11.081194 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:30:11.081202 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:30:11.081210 | orchestrator | 2025-09-03 00:30:11.081218 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2025-09-03 00:30:11.081226 | orchestrator | Wednesday 03 September 2025 00:29:29 +0000 (0:00:01.305) 0:05:25.879 *** 2025-09-03 00:30:11.081234 | orchestrator | ok: [testbed-manager] 2025-09-03 00:30:11.081242 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:30:11.081250 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:30:11.081258 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:30:11.081265 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:30:11.081273 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:30:11.081281 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:30:11.081289 | orchestrator | 2025-09-03 00:30:11.081297 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2025-09-03 00:30:11.081305 | orchestrator | Wednesday 03 September 2025 00:29:30 +0000 (0:00:01.305) 0:05:27.184 *** 2025-09-03 00:30:11.081313 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:30:11.081321 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:30:11.081329 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:30:11.081336 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:30:11.081344 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:30:11.081352 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:30:11.081360 | orchestrator | changed: [testbed-manager] 2025-09-03 00:30:11.081367 | orchestrator | 2025-09-03 00:30:11.081375 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2025-09-03 00:30:11.081384 | orchestrator | Wednesday 03 September 2025 00:29:31 +0000 (0:00:00.879) 0:05:28.064 *** 2025-09-03 00:30:11.081391 | orchestrator | ok: [testbed-manager] 2025-09-03 00:30:11.081400 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:30:11.081408 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:30:11.081415 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:30:11.081423 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:30:11.081431 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:30:11.081472 | orchestrator | 
changed: [testbed-node-5] 2025-09-03 00:30:11.081481 | orchestrator | 2025-09-03 00:30:11.081489 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2025-09-03 00:30:11.081497 | orchestrator | Wednesday 03 September 2025 00:29:41 +0000 (0:00:09.563) 0:05:37.627 *** 2025-09-03 00:30:11.081505 | orchestrator | changed: [testbed-manager] 2025-09-03 00:30:11.081513 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:30:11.081520 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:30:11.081528 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:30:11.081538 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:30:11.081549 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:30:11.081558 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:30:11.081568 | orchestrator | 2025-09-03 00:30:11.081578 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2025-09-03 00:30:11.081598 | orchestrator | Wednesday 03 September 2025 00:29:42 +0000 (0:00:00.877) 0:05:38.505 *** 2025-09-03 00:30:11.081608 | orchestrator | ok: [testbed-manager] 2025-09-03 00:30:11.081618 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:30:11.081627 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:30:11.081636 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:30:11.081645 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:30:11.081656 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:30:11.081665 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:30:11.081675 | orchestrator | 2025-09-03 00:30:11.081684 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2025-09-03 00:30:11.081693 | orchestrator | Wednesday 03 September 2025 00:29:50 +0000 (0:00:08.381) 0:05:46.886 *** 2025-09-03 00:30:11.081710 | orchestrator | ok: [testbed-manager] 2025-09-03 00:30:11.081720 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:30:11.081730 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:30:11.081739 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:30:11.081749 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:30:11.081758 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:30:11.081768 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:30:11.081777 | orchestrator | 2025-09-03 00:30:11.081787 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2025-09-03 00:30:11.081796 | orchestrator | Wednesday 03 September 2025 00:30:00 +0000 (0:00:10.505) 0:05:57.392 *** 2025-09-03 00:30:11.081806 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2025-09-03 00:30:11.081816 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2025-09-03 00:30:11.081825 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2025-09-03 00:30:11.081835 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2025-09-03 00:30:11.081844 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2025-09-03 00:30:11.081854 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2025-09-03 00:30:11.081863 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2025-09-03 00:30:11.081872 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2025-09-03 00:30:11.081882 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2025-09-03 00:30:11.081891 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2025-09-03 
00:30:11.081901 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2025-09-03 00:30:11.081909 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2025-09-03 00:30:11.081917 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2025-09-03 00:30:11.081925 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2025-09-03 00:30:11.081933 | orchestrator | 2025-09-03 00:30:11.081941 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2025-09-03 00:30:11.081963 | orchestrator | Wednesday 03 September 2025 00:30:02 +0000 (0:00:01.146) 0:05:58.538 *** 2025-09-03 00:30:11.081972 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:30:11.081980 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:30:11.081988 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:30:11.081996 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:30:11.082004 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:30:11.082012 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:30:11.082059 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:30:11.082068 | orchestrator | 2025-09-03 00:30:11.082076 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2025-09-03 00:30:11.082084 | orchestrator | Wednesday 03 September 2025 00:30:02 +0000 (0:00:00.536) 0:05:59.074 *** 2025-09-03 00:30:11.082092 | orchestrator | ok: [testbed-manager] 2025-09-03 00:30:11.082100 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:30:11.082108 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:30:11.082115 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:30:11.082123 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:30:11.082131 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:30:11.082139 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:30:11.082147 | orchestrator | 2025-09-03 00:30:11.082155 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2025-09-03 00:30:11.082164 | orchestrator | Wednesday 03 September 2025 00:30:06 +0000 (0:00:03.881) 0:06:02.956 *** 2025-09-03 00:30:11.082171 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:30:11.082179 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:30:11.082187 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:30:11.082195 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:30:11.082203 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:30:11.082211 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:30:11.082218 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:30:11.082231 | orchestrator | 2025-09-03 00:30:11.082240 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2025-09-03 00:30:11.082249 | orchestrator | Wednesday 03 September 2025 00:30:06 +0000 (0:00:00.495) 0:06:03.451 *** 2025-09-03 00:30:11.082256 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2025-09-03 00:30:11.082265 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2025-09-03 00:30:11.082273 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:30:11.082281 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2025-09-03 00:30:11.082289 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2025-09-03 00:30:11.082296 | orchestrator | skipping: 
[testbed-node-0] 2025-09-03 00:30:11.082304 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2025-09-03 00:30:11.082312 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2025-09-03 00:30:11.082320 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:30:11.082328 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2025-09-03 00:30:11.082336 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2025-09-03 00:30:11.082344 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:30:11.082351 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2025-09-03 00:30:11.082359 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2025-09-03 00:30:11.082367 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:30:11.082375 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2025-09-03 00:30:11.082387 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2025-09-03 00:30:11.082395 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:30:11.082403 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2025-09-03 00:30:11.082411 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2025-09-03 00:30:11.082419 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:30:11.082426 | orchestrator | 2025-09-03 00:30:11.082448 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2025-09-03 00:30:11.082457 | orchestrator | Wednesday 03 September 2025 00:30:07 +0000 (0:00:00.762) 0:06:04.214 *** 2025-09-03 00:30:11.082465 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:30:11.082473 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:30:11.082481 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:30:11.082489 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:30:11.082497 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:30:11.082505 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:30:11.082513 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:30:11.082521 | orchestrator | 2025-09-03 00:30:11.082529 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2025-09-03 00:30:11.082537 | orchestrator | Wednesday 03 September 2025 00:30:08 +0000 (0:00:00.522) 0:06:04.736 *** 2025-09-03 00:30:11.082545 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:30:11.082553 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:30:11.082561 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:30:11.082569 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:30:11.082577 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:30:11.082585 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:30:11.082593 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:30:11.082601 | orchestrator | 2025-09-03 00:30:11.082609 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2025-09-03 00:30:11.082617 | orchestrator | Wednesday 03 September 2025 00:30:08 +0000 (0:00:00.505) 0:06:05.242 *** 2025-09-03 00:30:11.082625 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:30:11.082633 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:30:11.082641 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:30:11.082649 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:30:11.082657 | orchestrator | 
skipping: [testbed-node-3] 2025-09-03 00:30:11.082670 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:30:11.082678 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:30:11.082686 | orchestrator | 2025-09-03 00:30:11.082694 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2025-09-03 00:30:11.082702 | orchestrator | Wednesday 03 September 2025 00:30:09 +0000 (0:00:00.588) 0:06:05.830 *** 2025-09-03 00:30:11.082711 | orchestrator | ok: [testbed-manager] 2025-09-03 00:30:11.082724 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:30:33.344353 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:30:33.344575 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:30:33.344605 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:30:33.344626 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:30:33.344646 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:30:33.344667 | orchestrator | 2025-09-03 00:30:33.344690 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2025-09-03 00:30:33.344712 | orchestrator | Wednesday 03 September 2025 00:30:11 +0000 (0:00:01.695) 0:06:07.525 *** 2025-09-03 00:30:33.344733 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-03 00:30:33.344759 | orchestrator | 2025-09-03 00:30:33.344781 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2025-09-03 00:30:33.344802 | orchestrator | Wednesday 03 September 2025 00:30:12 +0000 (0:00:00.991) 0:06:08.517 *** 2025-09-03 00:30:33.344822 | orchestrator | ok: [testbed-manager] 2025-09-03 00:30:33.344844 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:30:33.344866 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:30:33.344887 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:30:33.344907 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:30:33.344929 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:30:33.344950 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:30:33.344971 | orchestrator | 2025-09-03 00:30:33.344992 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2025-09-03 00:30:33.345013 | orchestrator | Wednesday 03 September 2025 00:30:12 +0000 (0:00:00.804) 0:06:09.322 *** 2025-09-03 00:30:33.345034 | orchestrator | ok: [testbed-manager] 2025-09-03 00:30:33.345056 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:30:33.345077 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:30:33.345096 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:30:33.345116 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:30:33.345136 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:30:33.345154 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:30:33.345173 | orchestrator | 2025-09-03 00:30:33.345191 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2025-09-03 00:30:33.345210 | orchestrator | Wednesday 03 September 2025 00:30:13 +0000 (0:00:00.788) 0:06:10.110 *** 2025-09-03 00:30:33.345228 | orchestrator | ok: [testbed-manager] 2025-09-03 00:30:33.345244 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:30:33.345261 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:30:33.345279 | orchestrator | changed: 
[testbed-node-2] 2025-09-03 00:30:33.345296 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:30:33.345313 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:30:33.345330 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:30:33.345346 | orchestrator | 2025-09-03 00:30:33.345363 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2025-09-03 00:30:33.345382 | orchestrator | Wednesday 03 September 2025 00:30:15 +0000 (0:00:01.352) 0:06:11.462 *** 2025-09-03 00:30:33.345400 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:30:33.345417 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:30:33.345433 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:30:33.345482 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:30:33.345502 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:30:33.345520 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:30:33.345577 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:30:33.345600 | orchestrator | 2025-09-03 00:30:33.345619 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2025-09-03 00:30:33.345658 | orchestrator | Wednesday 03 September 2025 00:30:16 +0000 (0:00:01.526) 0:06:12.989 *** 2025-09-03 00:30:33.345678 | orchestrator | ok: [testbed-manager] 2025-09-03 00:30:33.345694 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:30:33.345710 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:30:33.345727 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:30:33.345745 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:30:33.345763 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:30:33.345783 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:30:33.345801 | orchestrator | 2025-09-03 00:30:33.345819 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2025-09-03 00:30:33.345830 | orchestrator | Wednesday 03 September 2025 00:30:17 +0000 (0:00:01.311) 0:06:14.301 *** 2025-09-03 00:30:33.345841 | orchestrator | changed: [testbed-manager] 2025-09-03 00:30:33.345852 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:30:33.345862 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:30:33.345874 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:30:33.345884 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:30:33.345895 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:30:33.345906 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:30:33.345916 | orchestrator | 2025-09-03 00:30:33.345927 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2025-09-03 00:30:33.345938 | orchestrator | Wednesday 03 September 2025 00:30:19 +0000 (0:00:01.390) 0:06:15.692 *** 2025-09-03 00:30:33.345950 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-03 00:30:33.345963 | orchestrator | 2025-09-03 00:30:33.345974 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2025-09-03 00:30:33.345984 | orchestrator | Wednesday 03 September 2025 00:30:20 +0000 (0:00:00.983) 0:06:16.675 *** 2025-09-03 00:30:33.345995 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:30:33.346006 | orchestrator | ok: [testbed-manager] 2025-09-03 00:30:33.346085 | orchestrator | ok: 
[testbed-node-1] 2025-09-03 00:30:33.346100 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:30:33.346111 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:30:33.346121 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:30:33.346132 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:30:33.346143 | orchestrator | 2025-09-03 00:30:33.346154 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2025-09-03 00:30:33.346165 | orchestrator | Wednesday 03 September 2025 00:30:21 +0000 (0:00:01.379) 0:06:18.055 *** 2025-09-03 00:30:33.346175 | orchestrator | ok: [testbed-manager] 2025-09-03 00:30:33.346186 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:30:33.346220 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:30:33.346232 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:30:33.346243 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:30:33.346254 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:30:33.346264 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:30:33.346275 | orchestrator | 2025-09-03 00:30:33.346286 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2025-09-03 00:30:33.346296 | orchestrator | Wednesday 03 September 2025 00:30:22 +0000 (0:00:01.146) 0:06:19.201 *** 2025-09-03 00:30:33.346307 | orchestrator | ok: [testbed-manager] 2025-09-03 00:30:33.346318 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:30:33.346328 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:30:33.346339 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:30:33.346349 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:30:33.346360 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:30:33.346370 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:30:33.346381 | orchestrator | 2025-09-03 00:30:33.346392 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2025-09-03 00:30:33.346415 | orchestrator | Wednesday 03 September 2025 00:30:23 +0000 (0:00:01.100) 0:06:20.302 *** 2025-09-03 00:30:33.346427 | orchestrator | ok: [testbed-manager] 2025-09-03 00:30:33.346476 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:30:33.346490 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:30:33.346501 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:30:33.346513 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:30:33.346524 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:30:33.346535 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:30:33.346546 | orchestrator | 2025-09-03 00:30:33.346557 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2025-09-03 00:30:33.346568 | orchestrator | Wednesday 03 September 2025 00:30:24 +0000 (0:00:01.123) 0:06:21.425 *** 2025-09-03 00:30:33.346579 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-03 00:30:33.346590 | orchestrator | 2025-09-03 00:30:33.346601 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-03 00:30:33.346612 | orchestrator | Wednesday 03 September 2025 00:30:26 +0000 (0:00:01.218) 0:06:22.643 *** 2025-09-03 00:30:33.346623 | orchestrator | 2025-09-03 00:30:33.346633 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-03 00:30:33.346644 | orchestrator 
| Wednesday 03 September 2025 00:30:26 +0000 (0:00:00.040) 0:06:22.684 *** 2025-09-03 00:30:33.346654 | orchestrator | 2025-09-03 00:30:33.346665 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-03 00:30:33.346676 | orchestrator | Wednesday 03 September 2025 00:30:26 +0000 (0:00:00.039) 0:06:22.723 *** 2025-09-03 00:30:33.346686 | orchestrator | 2025-09-03 00:30:33.346697 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-03 00:30:33.346708 | orchestrator | Wednesday 03 September 2025 00:30:26 +0000 (0:00:00.046) 0:06:22.770 *** 2025-09-03 00:30:33.346718 | orchestrator | 2025-09-03 00:30:33.346729 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-03 00:30:33.346740 | orchestrator | Wednesday 03 September 2025 00:30:26 +0000 (0:00:00.038) 0:06:22.808 *** 2025-09-03 00:30:33.346750 | orchestrator | 2025-09-03 00:30:33.346761 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-03 00:30:33.346772 | orchestrator | Wednesday 03 September 2025 00:30:26 +0000 (0:00:00.038) 0:06:22.847 *** 2025-09-03 00:30:33.346782 | orchestrator | 2025-09-03 00:30:33.346793 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-03 00:30:33.346804 | orchestrator | Wednesday 03 September 2025 00:30:26 +0000 (0:00:00.046) 0:06:22.893 *** 2025-09-03 00:30:33.346814 | orchestrator | 2025-09-03 00:30:33.346825 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-09-03 00:30:33.346836 | orchestrator | Wednesday 03 September 2025 00:30:26 +0000 (0:00:00.039) 0:06:22.933 *** 2025-09-03 00:30:33.346847 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:30:33.346857 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:30:33.346868 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:30:33.346879 | orchestrator | 2025-09-03 00:30:33.346889 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2025-09-03 00:30:33.346900 | orchestrator | Wednesday 03 September 2025 00:30:27 +0000 (0:00:01.111) 0:06:24.044 *** 2025-09-03 00:30:33.346911 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:30:33.346921 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:30:33.346932 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:30:33.346943 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:30:33.346953 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:30:33.346964 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:30:33.346974 | orchestrator | changed: [testbed-manager] 2025-09-03 00:30:33.346985 | orchestrator | 2025-09-03 00:30:33.346996 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2025-09-03 00:30:33.347016 | orchestrator | Wednesday 03 September 2025 00:30:29 +0000 (0:00:01.876) 0:06:25.920 *** 2025-09-03 00:30:33.347027 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:30:33.347038 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:30:33.347048 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:30:33.347059 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:30:33.347070 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:30:33.347080 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:30:33.347091 | orchestrator | changed: [testbed-node-4] 2025-09-03 
00:30:33.347102 | orchestrator | 2025-09-03 00:30:33.347112 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2025-09-03 00:30:33.347123 | orchestrator | Wednesday 03 September 2025 00:30:32 +0000 (0:00:02.748) 0:06:28.669 *** 2025-09-03 00:30:33.347134 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:30:33.347144 | orchestrator | 2025-09-03 00:30:33.347155 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2025-09-03 00:30:33.347166 | orchestrator | Wednesday 03 September 2025 00:30:32 +0000 (0:00:00.107) 0:06:28.776 *** 2025-09-03 00:30:33.347176 | orchestrator | ok: [testbed-manager] 2025-09-03 00:30:33.347187 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:30:33.347198 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:30:33.347208 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:30:33.347226 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:30:58.917223 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:30:58.917346 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:30:58.917361 | orchestrator | 2025-09-03 00:30:58.917394 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2025-09-03 00:30:58.917407 | orchestrator | Wednesday 03 September 2025 00:30:33 +0000 (0:00:01.009) 0:06:29.786 *** 2025-09-03 00:30:58.917420 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:30:58.917431 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:30:58.917508 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:30:58.917522 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:30:58.917534 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:30:58.917545 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:30:58.917556 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:30:58.917567 | orchestrator | 2025-09-03 00:30:58.917579 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2025-09-03 00:30:58.917590 | orchestrator | Wednesday 03 September 2025 00:30:33 +0000 (0:00:00.568) 0:06:30.355 *** 2025-09-03 00:30:58.917602 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-03 00:30:58.917617 | orchestrator | 2025-09-03 00:30:58.917628 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2025-09-03 00:30:58.917640 | orchestrator | Wednesday 03 September 2025 00:30:35 +0000 (0:00:01.119) 0:06:31.474 *** 2025-09-03 00:30:58.917652 | orchestrator | ok: [testbed-manager] 2025-09-03 00:30:58.917664 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:30:58.917675 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:30:58.917686 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:30:58.917697 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:30:58.917708 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:30:58.917719 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:30:58.917730 | orchestrator | 2025-09-03 00:30:58.917741 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2025-09-03 00:30:58.917755 | orchestrator | Wednesday 03 September 2025 00:30:35 +0000 (0:00:00.859) 0:06:32.334 *** 2025-09-03 00:30:58.917769 | orchestrator | ok: [testbed-manager] 
=> (item=docker_containers) 2025-09-03 00:30:58.917782 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2025-09-03 00:30:58.917796 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2025-09-03 00:30:58.917834 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2025-09-03 00:30:58.917848 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2025-09-03 00:30:58.917860 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2025-09-03 00:30:58.917873 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2025-09-03 00:30:58.917886 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2025-09-03 00:30:58.917899 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2025-09-03 00:30:58.917912 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2025-09-03 00:30:58.917925 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2025-09-03 00:30:58.917937 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2025-09-03 00:30:58.917949 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2025-09-03 00:30:58.917968 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2025-09-03 00:30:58.917981 | orchestrator | 2025-09-03 00:30:58.917995 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2025-09-03 00:30:58.918008 | orchestrator | Wednesday 03 September 2025 00:30:38 +0000 (0:00:02.431) 0:06:34.765 *** 2025-09-03 00:30:58.918080 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:30:58.918094 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:30:58.918106 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:30:58.918119 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:30:58.918132 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:30:58.918142 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:30:58.918153 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:30:58.918164 | orchestrator | 2025-09-03 00:30:58.918175 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2025-09-03 00:30:58.918186 | orchestrator | Wednesday 03 September 2025 00:30:38 +0000 (0:00:00.467) 0:06:35.233 *** 2025-09-03 00:30:58.918199 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-03 00:30:58.918212 | orchestrator | 2025-09-03 00:30:58.918223 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2025-09-03 00:30:58.918234 | orchestrator | Wednesday 03 September 2025 00:30:39 +0000 (0:00:00.943) 0:06:36.176 *** 2025-09-03 00:30:58.918245 | orchestrator | ok: [testbed-manager] 2025-09-03 00:30:58.918255 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:30:58.918266 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:30:58.918277 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:30:58.918288 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:30:58.918298 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:30:58.918309 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:30:58.918320 | orchestrator | 2025-09-03 00:30:58.918331 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 
2025-09-03 00:30:58.918341 | orchestrator | Wednesday 03 September 2025 00:30:40 +0000 (0:00:00.804) 0:06:36.980 *** 2025-09-03 00:30:58.918353 | orchestrator | ok: [testbed-manager] 2025-09-03 00:30:58.918363 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:30:58.918374 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:30:58.918385 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:30:58.918395 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:30:58.918406 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:30:58.918417 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:30:58.918427 | orchestrator | 2025-09-03 00:30:58.918439 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2025-09-03 00:30:58.918488 | orchestrator | Wednesday 03 September 2025 00:30:41 +0000 (0:00:00.786) 0:06:37.767 *** 2025-09-03 00:30:58.918500 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:30:58.918511 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:30:58.918522 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:30:58.918548 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:30:58.918560 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:30:58.918571 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:30:58.918581 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:30:58.918592 | orchestrator | 2025-09-03 00:30:58.918603 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2025-09-03 00:30:58.918614 | orchestrator | Wednesday 03 September 2025 00:30:41 +0000 (0:00:00.463) 0:06:38.230 *** 2025-09-03 00:30:58.918625 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:30:58.918636 | orchestrator | ok: [testbed-manager] 2025-09-03 00:30:58.918647 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:30:58.918658 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:30:58.918668 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:30:58.918679 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:30:58.918690 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:30:58.918701 | orchestrator | 2025-09-03 00:30:58.918711 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2025-09-03 00:30:58.918722 | orchestrator | Wednesday 03 September 2025 00:30:43 +0000 (0:00:01.600) 0:06:39.830 *** 2025-09-03 00:30:58.918733 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:30:58.918744 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:30:58.918755 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:30:58.918766 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:30:58.918777 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:30:58.918787 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:30:58.918798 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:30:58.918809 | orchestrator | 2025-09-03 00:30:58.918820 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2025-09-03 00:30:58.918831 | orchestrator | Wednesday 03 September 2025 00:30:43 +0000 (0:00:00.444) 0:06:40.275 *** 2025-09-03 00:30:58.918842 | orchestrator | ok: [testbed-manager] 2025-09-03 00:30:58.918853 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:30:58.918863 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:30:58.918874 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:30:58.918885 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:30:58.918895 | 
orchestrator | changed: [testbed-node-5] 2025-09-03 00:30:58.918906 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:30:58.918917 | orchestrator | 2025-09-03 00:30:58.918928 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2025-09-03 00:30:58.918939 | orchestrator | Wednesday 03 September 2025 00:30:51 +0000 (0:00:07.958) 0:06:48.234 *** 2025-09-03 00:30:58.918949 | orchestrator | ok: [testbed-manager] 2025-09-03 00:30:58.918960 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:30:58.918971 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:30:58.918982 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:30:58.918992 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:30:58.919003 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:30:58.919014 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:30:58.919025 | orchestrator | 2025-09-03 00:30:58.919036 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2025-09-03 00:30:58.919047 | orchestrator | Wednesday 03 September 2025 00:30:53 +0000 (0:00:01.311) 0:06:49.546 *** 2025-09-03 00:30:58.919058 | orchestrator | ok: [testbed-manager] 2025-09-03 00:30:58.919069 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:30:58.919079 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:30:58.919090 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:30:58.919101 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:30:58.919117 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:30:58.919128 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:30:58.919138 | orchestrator | 2025-09-03 00:30:58.919149 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2025-09-03 00:30:58.919160 | orchestrator | Wednesday 03 September 2025 00:30:54 +0000 (0:00:01.707) 0:06:51.254 *** 2025-09-03 00:30:58.919171 | orchestrator | ok: [testbed-manager] 2025-09-03 00:30:58.919189 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:30:58.919200 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:30:58.919211 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:30:58.919222 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:30:58.919233 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:30:58.919243 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:30:58.919254 | orchestrator | 2025-09-03 00:30:58.919265 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-09-03 00:30:58.919276 | orchestrator | Wednesday 03 September 2025 00:30:56 +0000 (0:00:01.869) 0:06:53.123 *** 2025-09-03 00:30:58.919287 | orchestrator | ok: [testbed-manager] 2025-09-03 00:30:58.919298 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:30:58.919309 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:30:58.919320 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:30:58.919330 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:30:58.919341 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:30:58.919352 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:30:58.919363 | orchestrator | 2025-09-03 00:30:58.919374 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-09-03 00:30:58.919385 | orchestrator | Wednesday 03 September 2025 00:30:57 +0000 (0:00:00.788) 0:06:53.912 *** 2025-09-03 00:30:58.919395 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:30:58.919406 
| orchestrator | skipping: [testbed-node-0] 2025-09-03 00:30:58.919417 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:30:58.919428 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:30:58.919439 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:30:58.919465 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:30:58.919476 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:30:58.919486 | orchestrator | 2025-09-03 00:30:58.919497 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2025-09-03 00:30:58.919508 | orchestrator | Wednesday 03 September 2025 00:30:58 +0000 (0:00:00.924) 0:06:54.837 *** 2025-09-03 00:30:58.919519 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:30:58.919530 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:30:58.919541 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:30:58.919551 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:30:58.919562 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:30:58.919573 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:30:58.919584 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:30:58.919594 | orchestrator | 2025-09-03 00:30:58.919611 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2025-09-03 00:31:31.346157 | orchestrator | Wednesday 03 September 2025 00:30:58 +0000 (0:00:00.519) 0:06:55.357 *** 2025-09-03 00:31:31.346334 | orchestrator | ok: [testbed-manager] 2025-09-03 00:31:31.346353 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:31:31.346366 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:31:31.346378 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:31:31.346389 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:31:31.346400 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:31:31.346412 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:31:31.346423 | orchestrator | 2025-09-03 00:31:31.346435 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2025-09-03 00:31:31.346447 | orchestrator | Wednesday 03 September 2025 00:30:59 +0000 (0:00:00.524) 0:06:55.882 *** 2025-09-03 00:31:31.346501 | orchestrator | ok: [testbed-manager] 2025-09-03 00:31:31.346512 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:31:31.346524 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:31:31.346537 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:31:31.346556 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:31:31.346575 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:31:31.346593 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:31:31.346614 | orchestrator | 2025-09-03 00:31:31.346634 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2025-09-03 00:31:31.346648 | orchestrator | Wednesday 03 September 2025 00:30:59 +0000 (0:00:00.539) 0:06:56.421 *** 2025-09-03 00:31:31.346697 | orchestrator | ok: [testbed-manager] 2025-09-03 00:31:31.346710 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:31:31.346722 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:31:31.346735 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:31:31.346747 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:31:31.346760 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:31:31.346773 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:31:31.346785 | orchestrator | 2025-09-03 00:31:31.346798 | orchestrator | TASK [osism.services.chrony : Populate service facts] 
************************** 2025-09-03 00:31:31.346811 | orchestrator | Wednesday 03 September 2025 00:31:00 +0000 (0:00:00.518) 0:06:56.939 *** 2025-09-03 00:31:31.346824 | orchestrator | ok: [testbed-manager] 2025-09-03 00:31:31.346836 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:31:31.346849 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:31:31.346861 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:31:31.346873 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:31:31.346886 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:31:31.346899 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:31:31.346912 | orchestrator | 2025-09-03 00:31:31.346924 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2025-09-03 00:31:31.346937 | orchestrator | Wednesday 03 September 2025 00:31:06 +0000 (0:00:05.775) 0:07:02.715 *** 2025-09-03 00:31:31.346950 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:31:31.346964 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:31:31.346974 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:31:31.346985 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:31:31.346996 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:31:31.347007 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:31:31.347017 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:31:31.347028 | orchestrator | 2025-09-03 00:31:31.347039 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2025-09-03 00:31:31.347050 | orchestrator | Wednesday 03 September 2025 00:31:06 +0000 (0:00:00.607) 0:07:03.323 *** 2025-09-03 00:31:31.347082 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-03 00:31:31.347097 | orchestrator | 2025-09-03 00:31:31.347108 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2025-09-03 00:31:31.347119 | orchestrator | Wednesday 03 September 2025 00:31:07 +0000 (0:00:00.794) 0:07:04.117 *** 2025-09-03 00:31:31.347130 | orchestrator | ok: [testbed-manager] 2025-09-03 00:31:31.347140 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:31:31.347151 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:31:31.347161 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:31:31.347173 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:31:31.347183 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:31:31.347194 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:31:31.347204 | orchestrator | 2025-09-03 00:31:31.347215 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2025-09-03 00:31:31.347226 | orchestrator | Wednesday 03 September 2025 00:31:09 +0000 (0:00:02.024) 0:07:06.142 *** 2025-09-03 00:31:31.347236 | orchestrator | ok: [testbed-manager] 2025-09-03 00:31:31.347247 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:31:31.347258 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:31:31.347268 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:31:31.347279 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:31:31.347289 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:31:31.347300 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:31:31.347310 | orchestrator | 2025-09-03 00:31:31.347321 | orchestrator | TASK [osism.services.chrony : 
Check if configuration file exists] ************** 2025-09-03 00:31:31.347331 | orchestrator | Wednesday 03 September 2025 00:31:10 +0000 (0:00:01.064) 0:07:07.206 *** 2025-09-03 00:31:31.347342 | orchestrator | ok: [testbed-manager] 2025-09-03 00:31:31.347353 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:31:31.347372 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:31:31.347383 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:31:31.347393 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:31:31.347404 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:31:31.347414 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:31:31.347424 | orchestrator | 2025-09-03 00:31:31.347435 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2025-09-03 00:31:31.347446 | orchestrator | Wednesday 03 September 2025 00:31:11 +0000 (0:00:00.814) 0:07:08.021 *** 2025-09-03 00:31:31.347490 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-03 00:31:31.347503 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-03 00:31:31.347515 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-03 00:31:31.347545 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-03 00:31:31.347557 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-03 00:31:31.347568 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-03 00:31:31.347579 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-03 00:31:31.347589 | orchestrator | 2025-09-03 00:31:31.347601 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2025-09-03 00:31:31.347611 | orchestrator | Wednesday 03 September 2025 00:31:13 +0000 (0:00:01.644) 0:07:09.666 *** 2025-09-03 00:31:31.347623 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-03 00:31:31.347634 | orchestrator | 2025-09-03 00:31:31.347645 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2025-09-03 00:31:31.347655 | orchestrator | Wednesday 03 September 2025 00:31:14 +0000 (0:00:01.060) 0:07:10.726 *** 2025-09-03 00:31:31.347666 | orchestrator | changed: [testbed-manager] 2025-09-03 00:31:31.347676 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:31:31.347687 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:31:31.347698 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:31:31.347709 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:31:31.347719 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:31:31.347730 | orchestrator | changed: [testbed-node-3] 2025-09-03 
00:31:31.347740 | orchestrator | 2025-09-03 00:31:31.347751 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2025-09-03 00:31:31.347762 | orchestrator | Wednesday 03 September 2025 00:31:22 +0000 (0:00:08.721) 0:07:19.448 *** 2025-09-03 00:31:31.347773 | orchestrator | ok: [testbed-manager] 2025-09-03 00:31:31.347783 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:31:31.347794 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:31:31.347805 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:31:31.347815 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:31:31.347826 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:31:31.347836 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:31:31.347847 | orchestrator | 2025-09-03 00:31:31.347858 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2025-09-03 00:31:31.347869 | orchestrator | Wednesday 03 September 2025 00:31:25 +0000 (0:00:02.783) 0:07:22.231 *** 2025-09-03 00:31:31.347879 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:31:31.347890 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:31:31.347908 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:31:31.347919 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:31:31.347929 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:31:31.347940 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:31:31.347950 | orchestrator | 2025-09-03 00:31:31.347961 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2025-09-03 00:31:31.347978 | orchestrator | Wednesday 03 September 2025 00:31:27 +0000 (0:00:01.236) 0:07:23.467 *** 2025-09-03 00:31:31.347989 | orchestrator | changed: [testbed-manager] 2025-09-03 00:31:31.348000 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:31:31.348011 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:31:31.348021 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:31:31.348032 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:31:31.348043 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:31:31.348054 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:31:31.348064 | orchestrator | 2025-09-03 00:31:31.348075 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2025-09-03 00:31:31.348086 | orchestrator | 2025-09-03 00:31:31.348097 | orchestrator | TASK [Include hardening role] ************************************************** 2025-09-03 00:31:31.348108 | orchestrator | Wednesday 03 September 2025 00:31:28 +0000 (0:00:01.204) 0:07:24.672 *** 2025-09-03 00:31:31.348119 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:31:31.348129 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:31:31.348140 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:31:31.348151 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:31:31.348162 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:31:31.348173 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:31:31.348183 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:31:31.348194 | orchestrator | 2025-09-03 00:31:31.348205 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2025-09-03 00:31:31.348216 | orchestrator | 2025-09-03 00:31:31.348226 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2025-09-03 00:31:31.348237 | orchestrator | Wednesday 03 September 2025 
00:31:28 +0000 (0:00:00.396) 0:07:25.068 *** 2025-09-03 00:31:31.348248 | orchestrator | changed: [testbed-manager] 2025-09-03 00:31:31.348259 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:31:31.348270 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:31:31.348280 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:31:31.348291 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:31:31.348301 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:31:31.348312 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:31:31.348323 | orchestrator | 2025-09-03 00:31:31.348334 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2025-09-03 00:31:31.348345 | orchestrator | Wednesday 03 September 2025 00:31:29 +0000 (0:00:01.211) 0:07:26.280 *** 2025-09-03 00:31:31.348356 | orchestrator | ok: [testbed-manager] 2025-09-03 00:31:31.348366 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:31:31.348377 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:31:31.348388 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:31:31.348399 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:31:31.348409 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:31:31.348420 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:31:31.348431 | orchestrator | 2025-09-03 00:31:31.348442 | orchestrator | TASK [Include auditd role] ***************************************************** 2025-09-03 00:31:31.348474 | orchestrator | Wednesday 03 September 2025 00:31:31 +0000 (0:00:01.502) 0:07:27.782 *** 2025-09-03 00:31:53.402260 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:31:53.402406 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:31:53.402422 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:31:53.402435 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:31:53.402446 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:31:53.402511 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:31:53.402523 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:31:53.402535 | orchestrator | 2025-09-03 00:31:53.402579 | orchestrator | TASK [Include smartd role] ***************************************************** 2025-09-03 00:31:53.402593 | orchestrator | Wednesday 03 September 2025 00:31:31 +0000 (0:00:00.390) 0:07:28.173 *** 2025-09-03 00:31:53.402604 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-03 00:31:53.402617 | orchestrator | 2025-09-03 00:31:53.402629 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2025-09-03 00:31:53.402639 | orchestrator | Wednesday 03 September 2025 00:31:32 +0000 (0:00:00.793) 0:07:28.966 *** 2025-09-03 00:31:53.402653 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-03 00:31:53.402666 | orchestrator | 2025-09-03 00:31:53.402677 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2025-09-03 00:31:53.402688 | orchestrator | Wednesday 03 September 2025 00:31:33 +0000 (0:00:00.676) 0:07:29.642 *** 2025-09-03 00:31:53.402698 | orchestrator | changed: [testbed-manager] 2025-09-03 00:31:53.402710 | orchestrator | changed: [testbed-node-1] 2025-09-03 
00:31:53.402720 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:31:53.402731 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:31:53.402741 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:31:53.402752 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:31:53.402765 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:31:53.402778 | orchestrator | 2025-09-03 00:31:53.402790 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2025-09-03 00:31:53.402803 | orchestrator | Wednesday 03 September 2025 00:31:41 +0000 (0:00:08.171) 0:07:37.814 *** 2025-09-03 00:31:53.402816 | orchestrator | changed: [testbed-manager] 2025-09-03 00:31:53.402828 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:31:53.402840 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:31:53.402852 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:31:53.402865 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:31:53.402877 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:31:53.402889 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:31:53.402902 | orchestrator | 2025-09-03 00:31:53.402915 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2025-09-03 00:31:53.402928 | orchestrator | Wednesday 03 September 2025 00:31:42 +0000 (0:00:00.726) 0:07:38.541 *** 2025-09-03 00:31:53.402940 | orchestrator | changed: [testbed-manager] 2025-09-03 00:31:53.402952 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:31:53.402965 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:31:53.402977 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:31:53.402989 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:31:53.403001 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:31:53.403013 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:31:53.403025 | orchestrator | 2025-09-03 00:31:53.403038 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2025-09-03 00:31:53.403051 | orchestrator | Wednesday 03 September 2025 00:31:43 +0000 (0:00:01.336) 0:07:39.877 *** 2025-09-03 00:31:53.403063 | orchestrator | changed: [testbed-manager] 2025-09-03 00:31:53.403075 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:31:53.403087 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:31:53.403100 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:31:53.403113 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:31:53.403123 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:31:53.403134 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:31:53.403145 | orchestrator | 2025-09-03 00:31:53.403156 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2025-09-03 00:31:53.403167 | orchestrator | Wednesday 03 September 2025 00:31:45 +0000 (0:00:01.722) 0:07:41.599 *** 2025-09-03 00:31:53.403178 | orchestrator | changed: [testbed-manager] 2025-09-03 00:31:53.403198 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:31:53.403208 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:31:53.403224 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:31:53.403242 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:31:53.403261 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:31:53.403281 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:31:53.403299 | orchestrator | 2025-09-03 00:31:53.403318 | orchestrator | RUNNING HANDLER 
[osism.services.smartd : Restart smartd service] *************** 2025-09-03 00:31:53.403330 | orchestrator | Wednesday 03 September 2025 00:31:46 +0000 (0:00:01.220) 0:07:42.820 *** 2025-09-03 00:31:53.403341 | orchestrator | changed: [testbed-manager] 2025-09-03 00:31:53.403351 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:31:53.403362 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:31:53.403372 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:31:53.403383 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:31:53.403393 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:31:53.403403 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:31:53.403414 | orchestrator | 2025-09-03 00:31:53.403424 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2025-09-03 00:31:53.403435 | orchestrator | 2025-09-03 00:31:53.403446 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2025-09-03 00:31:53.403481 | orchestrator | Wednesday 03 September 2025 00:31:47 +0000 (0:00:01.246) 0:07:44.067 *** 2025-09-03 00:31:53.403492 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-03 00:31:53.403504 | orchestrator | 2025-09-03 00:31:53.403514 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-09-03 00:31:53.403546 | orchestrator | Wednesday 03 September 2025 00:31:48 +0000 (0:00:00.786) 0:07:44.854 *** 2025-09-03 00:31:53.403558 | orchestrator | ok: [testbed-manager] 2025-09-03 00:31:53.403570 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:31:53.403581 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:31:53.403592 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:31:53.403602 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:31:53.403613 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:31:53.403624 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:31:53.403634 | orchestrator | 2025-09-03 00:31:53.403645 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-09-03 00:31:53.403656 | orchestrator | Wednesday 03 September 2025 00:31:49 +0000 (0:00:00.804) 0:07:45.658 *** 2025-09-03 00:31:53.403667 | orchestrator | changed: [testbed-manager] 2025-09-03 00:31:53.403678 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:31:53.403688 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:31:53.403699 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:31:53.403709 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:31:53.403720 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:31:53.403731 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:31:53.403741 | orchestrator | 2025-09-03 00:31:53.403752 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2025-09-03 00:31:53.403763 | orchestrator | Wednesday 03 September 2025 00:31:50 +0000 (0:00:01.262) 0:07:46.920 *** 2025-09-03 00:31:53.403831 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-03 00:31:53.403844 | orchestrator | 2025-09-03 00:31:53.403854 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-09-03 00:31:53.403865 | orchestrator | Wednesday 03 September 2025 
00:31:51 +0000 (0:00:00.816) 0:07:47.737 *** 2025-09-03 00:31:53.403876 | orchestrator | ok: [testbed-manager] 2025-09-03 00:31:53.403887 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:31:53.403898 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:31:53.403908 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:31:53.403919 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:31:53.403939 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:31:53.403950 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:31:53.403960 | orchestrator | 2025-09-03 00:31:53.403971 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-09-03 00:31:53.403982 | orchestrator | Wednesday 03 September 2025 00:31:52 +0000 (0:00:00.828) 0:07:48.565 *** 2025-09-03 00:31:53.403993 | orchestrator | changed: [testbed-manager] 2025-09-03 00:31:53.404004 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:31:53.404015 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:31:53.404025 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:31:53.404036 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:31:53.404047 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:31:53.404057 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:31:53.404068 | orchestrator | 2025-09-03 00:31:53.404079 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-03 00:31:53.404091 | orchestrator | testbed-manager : ok=163  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0 2025-09-03 00:31:53.404103 | orchestrator | testbed-node-0 : ok=171  changed=66  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-09-03 00:31:53.404119 | orchestrator | testbed-node-1 : ok=171  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-09-03 00:31:53.404131 | orchestrator | testbed-node-2 : ok=171  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-09-03 00:31:53.404142 | orchestrator | testbed-node-3 : ok=170  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-09-03 00:31:53.404152 | orchestrator | testbed-node-4 : ok=170  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-09-03 00:31:53.404163 | orchestrator | testbed-node-5 : ok=170  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-09-03 00:31:53.404174 | orchestrator | 2025-09-03 00:31:53.404185 | orchestrator | 2025-09-03 00:31:53.404196 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-03 00:31:53.404207 | orchestrator | Wednesday 03 September 2025 00:31:53 +0000 (0:00:01.263) 0:07:49.829 *** 2025-09-03 00:31:53.404218 | orchestrator | =============================================================================== 2025-09-03 00:31:53.404229 | orchestrator | osism.commons.packages : Install required packages --------------------- 77.37s 2025-09-03 00:31:53.404240 | orchestrator | osism.commons.packages : Download required packages -------------------- 37.00s 2025-09-03 00:31:53.404251 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 34.60s 2025-09-03 00:31:53.404261 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.38s 2025-09-03 00:31:53.404272 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 11.36s 2025-09-03 00:31:53.404283 | orchestrator | osism.commons.packages : Remove 
dependencies that are no longer required -- 10.97s 2025-09-03 00:31:53.404295 | orchestrator | osism.services.docker : Install docker package ------------------------- 10.51s 2025-09-03 00:31:53.404305 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.56s 2025-09-03 00:31:53.404316 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 8.72s 2025-09-03 00:31:53.404327 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 8.38s 2025-09-03 00:31:53.404345 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.17s 2025-09-03 00:31:53.769581 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.96s 2025-09-03 00:31:53.769704 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 7.86s 2025-09-03 00:31:53.769748 | orchestrator | osism.services.rng : Install rng package -------------------------------- 7.74s 2025-09-03 00:31:53.769761 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.54s 2025-09-03 00:31:53.769772 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.19s 2025-09-03 00:31:53.769782 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.17s 2025-09-03 00:31:53.769794 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.10s 2025-09-03 00:31:53.769805 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.78s 2025-09-03 00:31:53.769816 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.71s 2025-09-03 00:31:54.036936 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-09-03 00:31:54.037033 | orchestrator | + osism apply network 2025-09-03 00:32:06.516292 | orchestrator | 2025-09-03 00:32:06 | INFO  | Task bed60f61-d64b-4c7f-a635-87afb8bd00ee (network) was prepared for execution. 2025-09-03 00:32:06.516439 | orchestrator | 2025-09-03 00:32:06 | INFO  | It takes a moment until task bed60f61-d64b-4c7f-a635-87afb8bd00ee (network) has been started and output is visible here. 
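Note: the PLAY RECAP above closes the bootstrap run (packages, Docker with the docker-compose plugin, osism.target, chrony, lldpd, journald, smartd and the osism.commons.state facts on the manager and all six nodes), and `osism apply network` has just been queued. The lines below are a minimal sketch of how that state could be spot-checked from a node before the network play runs; they are not part of this job's output, the node name is only an example, and the facts path assumes Ansible's default local-facts directory.

# Hypothetical verification of the bootstrap result (not taken from this job):
ssh testbed-node-0 'docker compose version'                   # compose v2 comes from the docker-compose-plugin package
ssh testbed-node-0 'systemctl is-active docker.service osism.target'
ssh testbed-node-0 'chronyc tracking'                         # chrony now provides time sync instead of systemd-timesyncd
ssh testbed-node-0 'systemctl is-enabled lldpd.service'
ssh testbed-node-0 'cat /etc/ansible/facts.d/*.fact'          # assumed path for the state written by osism.commons.state

As the two INFO lines above indicate, `osism apply network` only hands the playbook to the OSISM task runner; the Ansible output of the network play is streamed once the task has actually started.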
2025-09-03 00:32:33.888496 | orchestrator | 2025-09-03 00:32:33.888635 | orchestrator | PLAY [Apply role network] ****************************************************** 2025-09-03 00:32:33.888650 | orchestrator | 2025-09-03 00:32:33.888662 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2025-09-03 00:32:33.888673 | orchestrator | Wednesday 03 September 2025 00:32:10 +0000 (0:00:00.270) 0:00:00.271 *** 2025-09-03 00:32:33.888683 | orchestrator | ok: [testbed-manager] 2025-09-03 00:32:33.888695 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:32:33.888705 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:32:33.888716 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:32:33.888726 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:32:33.888736 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:32:33.888746 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:32:33.888756 | orchestrator | 2025-09-03 00:32:33.888765 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2025-09-03 00:32:33.888775 | orchestrator | Wednesday 03 September 2025 00:32:11 +0000 (0:00:00.669) 0:00:00.940 *** 2025-09-03 00:32:33.888788 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-03 00:32:33.888801 | orchestrator | 2025-09-03 00:32:33.888811 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2025-09-03 00:32:33.888821 | orchestrator | Wednesday 03 September 2025 00:32:12 +0000 (0:00:01.220) 0:00:02.161 *** 2025-09-03 00:32:33.888830 | orchestrator | ok: [testbed-manager] 2025-09-03 00:32:33.888840 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:32:33.888850 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:32:33.888860 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:32:33.888869 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:32:33.888879 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:32:33.888888 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:32:33.888898 | orchestrator | 2025-09-03 00:32:33.888908 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2025-09-03 00:32:33.888917 | orchestrator | Wednesday 03 September 2025 00:32:14 +0000 (0:00:01.957) 0:00:04.118 *** 2025-09-03 00:32:33.888927 | orchestrator | ok: [testbed-manager] 2025-09-03 00:32:33.888937 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:32:33.888946 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:32:33.888956 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:32:33.888965 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:32:33.888975 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:32:33.888984 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:32:33.888994 | orchestrator | 2025-09-03 00:32:33.889004 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2025-09-03 00:32:33.889044 | orchestrator | Wednesday 03 September 2025 00:32:16 +0000 (0:00:01.677) 0:00:05.795 *** 2025-09-03 00:32:33.889055 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2025-09-03 00:32:33.889065 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2025-09-03 00:32:33.889075 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2025-09-03 00:32:33.889084 | orchestrator 
| ok: [testbed-node-2] => (item=/etc/netplan) 2025-09-03 00:32:33.889094 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2025-09-03 00:32:33.889103 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2025-09-03 00:32:33.889112 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2025-09-03 00:32:33.889122 | orchestrator | 2025-09-03 00:32:33.889132 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2025-09-03 00:32:33.889141 | orchestrator | Wednesday 03 September 2025 00:32:17 +0000 (0:00:00.965) 0:00:06.761 *** 2025-09-03 00:32:33.889151 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-03 00:32:33.889161 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-03 00:32:33.889171 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-09-03 00:32:33.889180 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-03 00:32:33.889189 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-09-03 00:32:33.889199 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-09-03 00:32:33.889208 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-09-03 00:32:33.889218 | orchestrator | 2025-09-03 00:32:33.889228 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2025-09-03 00:32:33.889237 | orchestrator | Wednesday 03 September 2025 00:32:20 +0000 (0:00:03.178) 0:00:09.940 *** 2025-09-03 00:32:33.889247 | orchestrator | changed: [testbed-manager] 2025-09-03 00:32:33.889257 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:32:33.889266 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:32:33.889276 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:32:33.889285 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:32:33.889295 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:32:33.889304 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:32:33.889313 | orchestrator | 2025-09-03 00:32:33.889323 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2025-09-03 00:32:33.889333 | orchestrator | Wednesday 03 September 2025 00:32:21 +0000 (0:00:01.434) 0:00:11.374 *** 2025-09-03 00:32:33.889342 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-03 00:32:33.889352 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-03 00:32:33.889361 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-09-03 00:32:33.889371 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-09-03 00:32:33.889380 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-03 00:32:33.889390 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-09-03 00:32:33.889399 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-09-03 00:32:33.889409 | orchestrator | 2025-09-03 00:32:33.889418 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2025-09-03 00:32:33.889428 | orchestrator | Wednesday 03 September 2025 00:32:23 +0000 (0:00:01.973) 0:00:13.348 *** 2025-09-03 00:32:33.889437 | orchestrator | ok: [testbed-manager] 2025-09-03 00:32:33.889447 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:32:33.889473 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:32:33.889483 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:32:33.889493 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:32:33.889502 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:32:33.889512 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:32:33.889521 | orchestrator | 2025-09-03 
00:32:33.889531 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2025-09-03 00:32:33.889559 | orchestrator | Wednesday 03 September 2025 00:32:24 +0000 (0:00:01.028) 0:00:14.376 *** 2025-09-03 00:32:33.889569 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:32:33.889579 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:32:33.889589 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:32:33.889606 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:32:33.889616 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:32:33.889626 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:32:33.889635 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:32:33.889645 | orchestrator | 2025-09-03 00:32:33.889655 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2025-09-03 00:32:33.889664 | orchestrator | Wednesday 03 September 2025 00:32:25 +0000 (0:00:00.618) 0:00:14.995 *** 2025-09-03 00:32:33.889674 | orchestrator | ok: [testbed-manager] 2025-09-03 00:32:33.889683 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:32:33.889693 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:32:33.889702 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:32:33.889712 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:32:33.889722 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:32:33.889731 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:32:33.889740 | orchestrator | 2025-09-03 00:32:33.889750 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2025-09-03 00:32:33.889760 | orchestrator | Wednesday 03 September 2025 00:32:27 +0000 (0:00:02.089) 0:00:17.084 *** 2025-09-03 00:32:33.889769 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:32:33.889779 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:32:33.889788 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:32:33.889798 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:32:33.889807 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:32:33.889817 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:32:33.889845 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2025-09-03 00:32:33.889858 | orchestrator | 2025-09-03 00:32:33.889867 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2025-09-03 00:32:33.889877 | orchestrator | Wednesday 03 September 2025 00:32:28 +0000 (0:00:00.867) 0:00:17.952 *** 2025-09-03 00:32:33.889887 | orchestrator | ok: [testbed-manager] 2025-09-03 00:32:33.889897 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:32:33.889906 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:32:33.889916 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:32:33.889925 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:32:33.889935 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:32:33.889944 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:32:33.889954 | orchestrator | 2025-09-03 00:32:33.889963 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2025-09-03 00:32:33.889973 | orchestrator | Wednesday 03 September 2025 00:32:29 +0000 (0:00:01.560) 0:00:19.512 *** 2025-09-03 00:32:33.889983 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-03 00:32:33.889995 | orchestrator | 2025-09-03 00:32:33.890004 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-09-03 00:32:33.890066 | orchestrator | Wednesday 03 September 2025 00:32:31 +0000 (0:00:01.161) 0:00:20.673 *** 2025-09-03 00:32:33.890078 | orchestrator | ok: [testbed-manager] 2025-09-03 00:32:33.890088 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:32:33.890098 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:32:33.890107 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:32:33.890117 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:32:33.890127 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:32:33.890136 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:32:33.890146 | orchestrator | 2025-09-03 00:32:33.890156 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2025-09-03 00:32:33.890165 | orchestrator | Wednesday 03 September 2025 00:32:31 +0000 (0:00:00.917) 0:00:21.591 *** 2025-09-03 00:32:33.890175 | orchestrator | ok: [testbed-manager] 2025-09-03 00:32:33.890185 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:32:33.890194 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:32:33.890211 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:32:33.890221 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:32:33.890230 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:32:33.890240 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:32:33.890249 | orchestrator | 2025-09-03 00:32:33.890259 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-09-03 00:32:33.890269 | orchestrator | Wednesday 03 September 2025 00:32:32 +0000 (0:00:00.766) 0:00:22.358 *** 2025-09-03 00:32:33.890279 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2025-09-03 00:32:33.890288 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2025-09-03 00:32:33.890298 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-09-03 00:32:33.890307 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-09-03 00:32:33.890317 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-03 00:32:33.890327 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-09-03 00:32:33.890336 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-03 00:32:33.890346 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-09-03 00:32:33.890355 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-03 00:32:33.890365 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-09-03 00:32:33.890374 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-03 00:32:33.890384 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-03 00:32:33.890393 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-03 00:32:33.890403 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-03 
00:32:33.890413 | orchestrator | 2025-09-03 00:32:33.890429 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-09-03 00:32:48.781245 | orchestrator | Wednesday 03 September 2025 00:32:33 +0000 (0:00:01.162) 0:00:23.520 *** 2025-09-03 00:32:48.781400 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:32:48.781417 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:32:48.781429 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:32:48.781469 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:32:48.781480 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:32:48.781492 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:32:48.781504 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:32:48.781516 | orchestrator | 2025-09-03 00:32:48.781528 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2025-09-03 00:32:48.781540 | orchestrator | Wednesday 03 September 2025 00:32:34 +0000 (0:00:00.605) 0:00:24.125 *** 2025-09-03 00:32:48.781554 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-1, testbed-node-0, testbed-node-2, testbed-node-5, testbed-node-3, testbed-node-4 2025-09-03 00:32:48.781569 | orchestrator | 2025-09-03 00:32:48.781581 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2025-09-03 00:32:48.781592 | orchestrator | Wednesday 03 September 2025 00:32:38 +0000 (0:00:04.327) 0:00:28.452 *** 2025-09-03 00:32:48.781626 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-09-03 00:32:48.781639 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-09-03 00:32:48.781651 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-09-03 00:32:48.781692 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-09-03 00:32:48.781707 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-09-03 00:32:48.781718 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-09-03 00:32:48.781729 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', 
'192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-09-03 00:32:48.781741 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-09-03 00:32:48.781763 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-09-03 00:32:48.781776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-09-03 00:32:48.781790 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-09-03 00:32:48.781822 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-09-03 00:32:48.781836 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-09-03 00:32:48.781849 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-09-03 00:32:48.781861 | orchestrator | 2025-09-03 00:32:48.781874 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2025-09-03 00:32:48.781887 | orchestrator | Wednesday 03 September 2025 00:32:43 +0000 (0:00:04.594) 0:00:33.047 *** 2025-09-03 00:32:48.781900 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-09-03 00:32:48.781927 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-09-03 00:32:48.781940 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-09-03 00:32:48.781954 | orchestrator | changed: [testbed-manager] => 
(item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-09-03 00:32:48.781966 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-09-03 00:32:48.781979 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-09-03 00:32:48.781993 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-09-03 00:32:48.782006 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-09-03 00:32:48.782074 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-09-03 00:32:48.782087 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-09-03 00:32:48.782101 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-09-03 00:32:48.782113 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-09-03 00:32:48.782138 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-09-03 00:32:54.834563 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-09-03 00:32:54.834690 | orchestrator | 2025-09-03 00:32:54.834708 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2025-09-03 00:32:54.834722 | orchestrator | Wednesday 03 September 2025 00:32:48 +0000 (0:00:05.364) 0:00:38.412 *** 
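Note on the two tasks above: the role renders one .netdev/.network pair per VXLAN device, and the file names match the 30-vxlan*.netdev/.network paths listed by the cleanup task further down. A minimal sketch of what the vxlan0 pair on testbed-manager might contain, reconstructed only from the logged item data (VNI 42, MTU 1350, local IP 192.168.16.5, address 192.168.112.5/20); the actual templates in osism.commons.network are not shown in this log and may differ, in particular in how the 'dests' list is rendered (per-destination FDB entries are an assumption):

# sketch, run as root; writes the files systemd-networkd would pick up
cat > /etc/systemd/network/30-vxlan0.netdev <<'EOF'
[NetDev]
Name=vxlan0
Kind=vxlan
MTUBytes=1350

[VXLAN]
VNI=42
Local=192.168.16.5
EOF

cat > /etc/systemd/network/30-vxlan0.network <<'EOF'
[Match]
Name=vxlan0

[Network]
Address=192.168.112.5/20

# one static FDB entry per remote endpoint from the 'dests' list (first one shown)
[BridgeFDB]
MACAddress=00:00:00:00:00:00
Destination=192.168.16.10
EOF

networkctl reload   # corresponds to the "Reload systemd-networkd" handler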
2025-09-03 00:32:54.834764 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-03 00:32:54.834777 | orchestrator | 2025-09-03 00:32:54.834789 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-09-03 00:32:54.834800 | orchestrator | Wednesday 03 September 2025 00:32:49 +0000 (0:00:01.087) 0:00:39.500 *** 2025-09-03 00:32:54.834812 | orchestrator | ok: [testbed-manager] 2025-09-03 00:32:54.834824 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:32:54.834835 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:32:54.834846 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:32:54.834857 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:32:54.834868 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:32:54.834879 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:32:54.834891 | orchestrator | 2025-09-03 00:32:54.834902 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-09-03 00:32:54.834914 | orchestrator | Wednesday 03 September 2025 00:32:50 +0000 (0:00:01.091) 0:00:40.591 *** 2025-09-03 00:32:54.834925 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-03 00:32:54.834937 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-03 00:32:54.834948 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-03 00:32:54.834959 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-03 00:32:54.834970 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-03 00:32:54.834981 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-03 00:32:54.834992 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-03 00:32:54.835003 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-03 00:32:54.835014 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:32:54.835026 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-03 00:32:54.835040 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-03 00:32:54.835053 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-03 00:32:54.835065 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-03 00:32:54.835079 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:32:54.835093 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-03 00:32:54.835106 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-03 00:32:54.835119 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-03 00:32:54.835131 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-03 00:32:54.835144 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:32:54.835157 | orchestrator | skipping: [testbed-node-3] => 
(item=/etc/systemd/network/30-vxlan1.network)  2025-09-03 00:32:54.835170 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-03 00:32:54.835183 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-03 00:32:54.835196 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-03 00:32:54.835208 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:32:54.835240 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-03 00:32:54.835254 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-03 00:32:54.835267 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-03 00:32:54.835290 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-03 00:32:54.835303 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:32:54.835317 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:32:54.835330 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-03 00:32:54.835343 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-03 00:32:54.835355 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-03 00:32:54.835369 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-03 00:32:54.835382 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:32:54.835395 | orchestrator | 2025-09-03 00:32:54.835407 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2025-09-03 00:32:54.835463 | orchestrator | Wednesday 03 September 2025 00:32:53 +0000 (0:00:02.184) 0:00:42.776 *** 2025-09-03 00:32:54.835476 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:32:54.835487 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:32:54.835498 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:32:54.835509 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:32:54.835520 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:32:54.835531 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:32:54.835542 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:32:54.835553 | orchestrator | 2025-09-03 00:32:54.835564 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-09-03 00:32:54.835575 | orchestrator | Wednesday 03 September 2025 00:32:53 +0000 (0:00:00.628) 0:00:43.405 *** 2025-09-03 00:32:54.835586 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:32:54.835597 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:32:54.835608 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:32:54.835618 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:32:54.835629 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:32:54.835640 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:32:54.835651 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:32:54.835662 | orchestrator | 2025-09-03 00:32:54.835673 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-03 00:32:54.835690 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-03 00:32:54.835704 
| orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-03 00:32:54.835716 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-03 00:32:54.835727 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-03 00:32:54.835738 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-03 00:32:54.835749 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-03 00:32:54.835760 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-03 00:32:54.835771 | orchestrator | 2025-09-03 00:32:54.835782 | orchestrator | 2025-09-03 00:32:54.835793 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-03 00:32:54.835804 | orchestrator | Wednesday 03 September 2025 00:32:54 +0000 (0:00:00.698) 0:00:44.103 *** 2025-09-03 00:32:54.835815 | orchestrator | =============================================================================== 2025-09-03 00:32:54.835833 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.36s 2025-09-03 00:32:54.835844 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 4.59s 2025-09-03 00:32:54.835855 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.33s 2025-09-03 00:32:54.835866 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.18s 2025-09-03 00:32:54.835877 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.18s 2025-09-03 00:32:54.835888 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.09s 2025-09-03 00:32:54.835899 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.97s 2025-09-03 00:32:54.835909 | orchestrator | osism.commons.network : Install required packages ----------------------- 1.96s 2025-09-03 00:32:54.835920 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.68s 2025-09-03 00:32:54.835931 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.56s 2025-09-03 00:32:54.835942 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.43s 2025-09-03 00:32:54.835953 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.22s 2025-09-03 00:32:54.835964 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.16s 2025-09-03 00:32:54.835975 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.16s 2025-09-03 00:32:54.835986 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.09s 2025-09-03 00:32:54.835997 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.09s 2025-09-03 00:32:54.836007 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.03s 2025-09-03 00:32:54.836018 | orchestrator | osism.commons.network : Create required directories --------------------- 0.97s 2025-09-03 00:32:54.836029 | orchestrator | osism.commons.network : List existing configuration 
files --------------- 0.92s 2025-09-03 00:32:54.836040 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.87s 2025-09-03 00:32:55.270953 | orchestrator | + osism apply wireguard 2025-09-03 00:33:07.173108 | orchestrator | 2025-09-03 00:33:07 | INFO  | Task b6b5bd39-5f73-4f41-9abe-cb8a62e3cad1 (wireguard) was prepared for execution. 2025-09-03 00:33:07.173231 | orchestrator | 2025-09-03 00:33:07 | INFO  | It takes a moment until task b6b5bd39-5f73-4f41-9abe-cb8a62e3cad1 (wireguard) has been started and output is visible here. 2025-09-03 00:33:25.302778 | orchestrator | 2025-09-03 00:33:25.302901 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-09-03 00:33:25.302917 | orchestrator | 2025-09-03 00:33:25.302928 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-09-03 00:33:25.302939 | orchestrator | Wednesday 03 September 2025 00:33:10 +0000 (0:00:00.202) 0:00:00.202 *** 2025-09-03 00:33:25.302950 | orchestrator | ok: [testbed-manager] 2025-09-03 00:33:25.302961 | orchestrator | 2025-09-03 00:33:25.302971 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-09-03 00:33:25.302981 | orchestrator | Wednesday 03 September 2025 00:33:11 +0000 (0:00:01.183) 0:00:01.386 *** 2025-09-03 00:33:25.302991 | orchestrator | changed: [testbed-manager] 2025-09-03 00:33:25.303001 | orchestrator | 2025-09-03 00:33:25.303011 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-09-03 00:33:25.303021 | orchestrator | Wednesday 03 September 2025 00:33:17 +0000 (0:00:05.914) 0:00:07.301 *** 2025-09-03 00:33:25.303031 | orchestrator | changed: [testbed-manager] 2025-09-03 00:33:25.303040 | orchestrator | 2025-09-03 00:33:25.303050 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-09-03 00:33:25.303059 | orchestrator | Wednesday 03 September 2025 00:33:18 +0000 (0:00:00.514) 0:00:07.816 *** 2025-09-03 00:33:25.303086 | orchestrator | changed: [testbed-manager] 2025-09-03 00:33:25.303121 | orchestrator | 2025-09-03 00:33:25.303131 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-09-03 00:33:25.303142 | orchestrator | Wednesday 03 September 2025 00:33:18 +0000 (0:00:00.450) 0:00:08.266 *** 2025-09-03 00:33:25.303152 | orchestrator | ok: [testbed-manager] 2025-09-03 00:33:25.303161 | orchestrator | 2025-09-03 00:33:25.303171 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-09-03 00:33:25.303181 | orchestrator | Wednesday 03 September 2025 00:33:19 +0000 (0:00:00.509) 0:00:08.775 *** 2025-09-03 00:33:25.303190 | orchestrator | ok: [testbed-manager] 2025-09-03 00:33:25.303200 | orchestrator | 2025-09-03 00:33:25.303209 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2025-09-03 00:33:25.303219 | orchestrator | Wednesday 03 September 2025 00:33:19 +0000 (0:00:00.527) 0:00:09.303 *** 2025-09-03 00:33:25.303228 | orchestrator | ok: [testbed-manager] 2025-09-03 00:33:25.303237 | orchestrator | 2025-09-03 00:33:25.303247 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-09-03 00:33:25.303256 | orchestrator | Wednesday 03 September 2025 00:33:20 +0000 (0:00:00.422) 0:00:09.725 *** 2025-09-03 00:33:25.303266 | orchestrator | 
changed: [testbed-manager] 2025-09-03 00:33:25.303275 | orchestrator | 2025-09-03 00:33:25.303285 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2025-09-03 00:33:25.303294 | orchestrator | Wednesday 03 September 2025 00:33:21 +0000 (0:00:01.185) 0:00:10.910 *** 2025-09-03 00:33:25.303304 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-03 00:33:25.303315 | orchestrator | changed: [testbed-manager] 2025-09-03 00:33:25.303327 | orchestrator | 2025-09-03 00:33:25.303338 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-09-03 00:33:25.303349 | orchestrator | Wednesday 03 September 2025 00:33:22 +0000 (0:00:00.831) 0:00:11.742 *** 2025-09-03 00:33:25.303361 | orchestrator | changed: [testbed-manager] 2025-09-03 00:33:25.303372 | orchestrator | 2025-09-03 00:33:25.303383 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2025-09-03 00:33:25.303395 | orchestrator | Wednesday 03 September 2025 00:33:23 +0000 (0:00:01.655) 0:00:13.398 *** 2025-09-03 00:33:25.303436 | orchestrator | changed: [testbed-manager] 2025-09-03 00:33:25.303448 | orchestrator | 2025-09-03 00:33:25.303458 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-03 00:33:25.303470 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-03 00:33:25.303482 | orchestrator | 2025-09-03 00:33:25.303493 | orchestrator | 2025-09-03 00:33:25.303505 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-03 00:33:25.303516 | orchestrator | Wednesday 03 September 2025 00:33:24 +0000 (0:00:01.009) 0:00:14.407 *** 2025-09-03 00:33:25.303527 | orchestrator | =============================================================================== 2025-09-03 00:33:25.303538 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 5.91s 2025-09-03 00:33:25.303549 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.66s 2025-09-03 00:33:25.303561 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.19s 2025-09-03 00:33:25.303572 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.18s 2025-09-03 00:33:25.303584 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 1.01s 2025-09-03 00:33:25.303595 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.83s 2025-09-03 00:33:25.303606 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.53s 2025-09-03 00:33:25.303617 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.51s 2025-09-03 00:33:25.303628 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.51s 2025-09-03 00:33:25.303640 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.45s 2025-09-03 00:33:25.303660 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.42s 2025-09-03 00:33:25.581998 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2025-09-03 00:33:25.615379 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2025-09-03 00:33:25.615475 | 
orchestrator | Dload Upload Total Spent Left Speed 2025-09-03 00:33:25.697787 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 182 0 --:--:-- --:--:-- --:--:-- 182 2025-09-03 00:33:25.710829 | orchestrator | + osism apply --environment custom workarounds 2025-09-03 00:33:27.489901 | orchestrator | 2025-09-03 00:33:27 | INFO  | Trying to run play workarounds in environment custom 2025-09-03 00:33:37.725864 | orchestrator | 2025-09-03 00:33:37 | INFO  | Task 332940cc-4ca4-4984-a0fc-a6a6038421c0 (workarounds) was prepared for execution. 2025-09-03 00:33:37.725987 | orchestrator | 2025-09-03 00:33:37 | INFO  | It takes a moment until task 332940cc-4ca4-4984-a0fc-a6a6038421c0 (workarounds) has been started and output is visible here. 2025-09-03 00:34:01.379506 | orchestrator | 2025-09-03 00:34:01.379624 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-03 00:34:01.379641 | orchestrator | 2025-09-03 00:34:01.379653 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-09-03 00:34:01.379666 | orchestrator | Wednesday 03 September 2025 00:33:41 +0000 (0:00:00.146) 0:00:00.146 *** 2025-09-03 00:34:01.379678 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-09-03 00:34:01.379689 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-09-03 00:34:01.379736 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-09-03 00:34:01.379748 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-09-03 00:34:01.379760 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-09-03 00:34:01.379771 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-09-03 00:34:01.379782 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-09-03 00:34:01.379793 | orchestrator | 2025-09-03 00:34:01.379804 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-09-03 00:34:01.379815 | orchestrator | 2025-09-03 00:34:01.379827 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-09-03 00:34:01.379838 | orchestrator | Wednesday 03 September 2025 00:33:42 +0000 (0:00:00.582) 0:00:00.728 *** 2025-09-03 00:34:01.379849 | orchestrator | ok: [testbed-manager] 2025-09-03 00:34:01.379862 | orchestrator | 2025-09-03 00:34:01.379873 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-09-03 00:34:01.379884 | orchestrator | 2025-09-03 00:34:01.379895 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-09-03 00:34:01.379907 | orchestrator | Wednesday 03 September 2025 00:33:44 +0000 (0:00:02.063) 0:00:02.792 *** 2025-09-03 00:34:01.379918 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:34:01.379929 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:34:01.379940 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:34:01.379951 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:34:01.379962 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:34:01.379973 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:34:01.379984 | orchestrator | 2025-09-03 00:34:01.379996 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2025-09-03 00:34:01.380007 | orchestrator | 
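The play whose output follows copies the testbed CA certificate to the non-manager nodes and refreshes their trust store. A manual equivalent on a Debian/Ubuntu node might look like this sketch; the destination path is an assumption (update-ca-certificates only considers *.crt files below /usr/local/share/ca-certificates), the source path is taken from the log:

# sketch: distribute the testbed CA and rebuild /etc/ssl/certs
sudo install -m 0644 /opt/configuration/environments/kolla/certificates/ca/testbed.crt \
  /usr/local/share/ca-certificates/testbed.crt
sudo update-ca-certificates   # matches the "Run update-ca-certificates" task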
2025-09-03 00:34:01.380021 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2025-09-03 00:34:01.380035 | orchestrator | Wednesday 03 September 2025 00:33:45 +0000 (0:00:01.787) 0:00:04.580 *** 2025-09-03 00:34:01.380050 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-03 00:34:01.380064 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-03 00:34:01.380096 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-03 00:34:01.380111 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-03 00:34:01.380124 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-03 00:34:01.380137 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-03 00:34:01.380150 | orchestrator | 2025-09-03 00:34:01.380164 | orchestrator | TASK [Run update-ca-certificates] ********************************************** 2025-09-03 00:34:01.380177 | orchestrator | Wednesday 03 September 2025 00:33:47 +0000 (0:00:01.405) 0:00:05.985 *** 2025-09-03 00:34:01.380192 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:34:01.380206 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:34:01.380219 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:34:01.380232 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:34:01.380245 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:34:01.380258 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:34:01.380271 | orchestrator | 2025-09-03 00:34:01.380284 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-09-03 00:34:01.380298 | orchestrator | Wednesday 03 September 2025 00:33:51 +0000 (0:00:03.821) 0:00:09.807 *** 2025-09-03 00:34:01.380311 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:34:01.380324 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:34:01.380338 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:34:01.380352 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:34:01.380384 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:34:01.380396 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:34:01.380407 | orchestrator | 2025-09-03 00:34:01.380418 | orchestrator | PLAY [Add a workaround service] ************************************************ 2025-09-03 00:34:01.380429 | orchestrator | 2025-09-03 00:34:01.380440 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2025-09-03 00:34:01.380451 | orchestrator | Wednesday 03 September 2025 00:33:51 +0000 (0:00:00.654) 0:00:10.462 *** 2025-09-03 00:34:01.380462 | orchestrator | changed: [testbed-manager] 2025-09-03 00:34:01.380472 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:34:01.380484 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:34:01.380494 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:34:01.380505 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:34:01.380516 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:34:01.380527 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:34:01.380537 | orchestrator | 2025-09-03 00:34:01.380548 | 
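The next task installs a systemd unit for the workarounds.sh script copied above. The unit file itself is not visible in this log; a purely hypothetical oneshot unit of the following shape would fit the later "Reload systemd daemon" and "Enable workarounds.service (Debian)" steps (the ExecStart path is an assumption):

# hypothetical sketch, run as root
cat > /etc/systemd/system/workarounds.service <<'EOF'
[Unit]
Description=Apply testbed workarounds at boot
After=network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
# assumed location of the script installed by the previous task
ExecStart=/usr/local/bin/workarounds.sh

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable workarounds.service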
orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2025-09-03 00:34:01.380559 | orchestrator | Wednesday 03 September 2025 00:33:53 +0000 (0:00:01.540) 0:00:12.002 *** 2025-09-03 00:34:01.380570 | orchestrator | changed: [testbed-manager] 2025-09-03 00:34:01.380581 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:34:01.380592 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:34:01.380603 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:34:01.380614 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:34:01.380625 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:34:01.380653 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:34:01.380664 | orchestrator | 2025-09-03 00:34:01.380675 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2025-09-03 00:34:01.380687 | orchestrator | Wednesday 03 September 2025 00:33:54 +0000 (0:00:01.511) 0:00:13.514 *** 2025-09-03 00:34:01.380698 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:34:01.380708 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:34:01.380719 | orchestrator | ok: [testbed-manager] 2025-09-03 00:34:01.380730 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:34:01.380741 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:34:01.380759 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:34:01.380770 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:34:01.380781 | orchestrator | 2025-09-03 00:34:01.380797 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2025-09-03 00:34:01.380808 | orchestrator | Wednesday 03 September 2025 00:33:56 +0000 (0:00:01.425) 0:00:14.939 *** 2025-09-03 00:34:01.380819 | orchestrator | changed: [testbed-manager] 2025-09-03 00:34:01.380830 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:34:01.380841 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:34:01.380852 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:34:01.380863 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:34:01.380873 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:34:01.380884 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:34:01.380895 | orchestrator | 2025-09-03 00:34:01.380906 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2025-09-03 00:34:01.380917 | orchestrator | Wednesday 03 September 2025 00:33:58 +0000 (0:00:01.762) 0:00:16.702 *** 2025-09-03 00:34:01.380927 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:34:01.380938 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:34:01.380949 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:34:01.380960 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:34:01.380970 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:34:01.380981 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:34:01.380992 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:34:01.381002 | orchestrator | 2025-09-03 00:34:01.381013 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2025-09-03 00:34:01.381024 | orchestrator | 2025-09-03 00:34:01.381035 | orchestrator | TASK [Install python3-docker] ************************************************** 2025-09-03 00:34:01.381046 | orchestrator | Wednesday 03 September 2025 00:33:58 +0000 (0:00:00.574) 0:00:17.277 *** 2025-09-03 00:34:01.381057 | orchestrator | ok: [testbed-manager] 2025-09-03 00:34:01.381068 
| orchestrator | ok: [testbed-node-4] 2025-09-03 00:34:01.381079 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:34:01.381090 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:34:01.381101 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:34:01.381111 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:34:01.381122 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:34:01.381133 | orchestrator | 2025-09-03 00:34:01.381143 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-03 00:34:01.381155 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-03 00:34:01.381169 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-03 00:34:01.381180 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-03 00:34:01.381190 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-03 00:34:01.381201 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-03 00:34:01.381212 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-03 00:34:01.381223 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-03 00:34:01.381234 | orchestrator | 2025-09-03 00:34:01.381245 | orchestrator | 2025-09-03 00:34:01.381256 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-03 00:34:01.381267 | orchestrator | Wednesday 03 September 2025 00:34:01 +0000 (0:00:02.649) 0:00:19.927 *** 2025-09-03 00:34:01.381284 | orchestrator | =============================================================================== 2025-09-03 00:34:01.381295 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.82s 2025-09-03 00:34:01.381306 | orchestrator | Install python3-docker -------------------------------------------------- 2.65s 2025-09-03 00:34:01.381317 | orchestrator | Apply netplan configuration --------------------------------------------- 2.06s 2025-09-03 00:34:01.381328 | orchestrator | Apply netplan configuration --------------------------------------------- 1.79s 2025-09-03 00:34:01.381338 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.76s 2025-09-03 00:34:01.381349 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.54s 2025-09-03 00:34:01.381360 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.51s 2025-09-03 00:34:01.381390 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.43s 2025-09-03 00:34:01.381401 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.41s 2025-09-03 00:34:01.381412 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.65s 2025-09-03 00:34:01.381423 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.58s 2025-09-03 00:34:01.381440 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.57s 2025-09-03 00:34:01.964661 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2025-09-03 00:34:13.878481 | orchestrator | 
2025-09-03 00:34:13 | INFO  | Task 84a9757f-0f26-4281-bc43-88ea285aedd1 (reboot) was prepared for execution. 2025-09-03 00:34:13.878649 | orchestrator | 2025-09-03 00:34:13 | INFO  | It takes a moment until task 84a9757f-0f26-4281-bc43-88ea285aedd1 (reboot) has been started and output is visible here. 2025-09-03 00:34:22.906442 | orchestrator | 2025-09-03 00:34:22.906571 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-03 00:34:22.906581 | orchestrator | 2025-09-03 00:34:22.906589 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-03 00:34:22.906597 | orchestrator | Wednesday 03 September 2025 00:34:17 +0000 (0:00:00.153) 0:00:00.153 *** 2025-09-03 00:34:22.906604 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:34:22.906611 | orchestrator | 2025-09-03 00:34:22.906617 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-03 00:34:22.906624 | orchestrator | Wednesday 03 September 2025 00:34:17 +0000 (0:00:00.076) 0:00:00.230 *** 2025-09-03 00:34:22.906631 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:34:22.906637 | orchestrator | 2025-09-03 00:34:22.906643 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-03 00:34:22.906649 | orchestrator | Wednesday 03 September 2025 00:34:18 +0000 (0:00:00.891) 0:00:01.121 *** 2025-09-03 00:34:22.906655 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:34:22.906661 | orchestrator | 2025-09-03 00:34:22.906668 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-03 00:34:22.906674 | orchestrator | 2025-09-03 00:34:22.906681 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-03 00:34:22.906687 | orchestrator | Wednesday 03 September 2025 00:34:18 +0000 (0:00:00.108) 0:00:01.230 *** 2025-09-03 00:34:22.906693 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:34:22.906699 | orchestrator | 2025-09-03 00:34:22.906705 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-03 00:34:22.906711 | orchestrator | Wednesday 03 September 2025 00:34:18 +0000 (0:00:00.079) 0:00:01.309 *** 2025-09-03 00:34:22.906717 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:34:22.906724 | orchestrator | 2025-09-03 00:34:22.906730 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-03 00:34:22.906736 | orchestrator | Wednesday 03 September 2025 00:34:19 +0000 (0:00:00.617) 0:00:01.926 *** 2025-09-03 00:34:22.906742 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:34:22.906776 | orchestrator | 2025-09-03 00:34:22.906783 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-03 00:34:22.906789 | orchestrator | 2025-09-03 00:34:22.906795 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-03 00:34:22.906801 | orchestrator | Wednesday 03 September 2025 00:34:19 +0000 (0:00:00.102) 0:00:02.029 *** 2025-09-03 00:34:22.906807 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:34:22.906813 | orchestrator | 2025-09-03 00:34:22.906819 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-03 00:34:22.906825 | orchestrator | Wednesday 03 September 2025 
00:34:19 +0000 (0:00:00.170) 0:00:02.199 *** 2025-09-03 00:34:22.906831 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:34:22.906838 | orchestrator | 2025-09-03 00:34:22.906844 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-03 00:34:22.906850 | orchestrator | Wednesday 03 September 2025 00:34:20 +0000 (0:00:00.633) 0:00:02.832 *** 2025-09-03 00:34:22.906856 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:34:22.906862 | orchestrator | 2025-09-03 00:34:22.906868 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-03 00:34:22.906874 | orchestrator | 2025-09-03 00:34:22.906880 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-03 00:34:22.906886 | orchestrator | Wednesday 03 September 2025 00:34:20 +0000 (0:00:00.094) 0:00:02.927 *** 2025-09-03 00:34:22.906893 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:34:22.906899 | orchestrator | 2025-09-03 00:34:22.906905 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-03 00:34:22.906911 | orchestrator | Wednesday 03 September 2025 00:34:20 +0000 (0:00:00.081) 0:00:03.008 *** 2025-09-03 00:34:22.906917 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:34:22.906923 | orchestrator | 2025-09-03 00:34:22.906929 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-03 00:34:22.906936 | orchestrator | Wednesday 03 September 2025 00:34:20 +0000 (0:00:00.656) 0:00:03.665 *** 2025-09-03 00:34:22.906944 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:34:22.906951 | orchestrator | 2025-09-03 00:34:22.906959 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-03 00:34:22.906966 | orchestrator | 2025-09-03 00:34:22.906973 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-03 00:34:22.906980 | orchestrator | Wednesday 03 September 2025 00:34:20 +0000 (0:00:00.097) 0:00:03.762 *** 2025-09-03 00:34:22.906987 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:34:22.906994 | orchestrator | 2025-09-03 00:34:22.907001 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-03 00:34:22.907009 | orchestrator | Wednesday 03 September 2025 00:34:21 +0000 (0:00:00.084) 0:00:03.846 *** 2025-09-03 00:34:22.907016 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:34:22.907023 | orchestrator | 2025-09-03 00:34:22.907031 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-03 00:34:22.907038 | orchestrator | Wednesday 03 September 2025 00:34:21 +0000 (0:00:00.650) 0:00:04.497 *** 2025-09-03 00:34:22.907045 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:34:22.907052 | orchestrator | 2025-09-03 00:34:22.907059 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-03 00:34:22.907066 | orchestrator | 2025-09-03 00:34:22.907074 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-03 00:34:22.907081 | orchestrator | Wednesday 03 September 2025 00:34:21 +0000 (0:00:00.088) 0:00:04.586 *** 2025-09-03 00:34:22.907088 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:34:22.907095 | orchestrator | 2025-09-03 00:34:22.907102 | 
orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-03 00:34:22.907109 | orchestrator | Wednesday 03 September 2025 00:34:21 +0000 (0:00:00.095) 0:00:04.681 *** 2025-09-03 00:34:22.907116 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:34:22.907124 | orchestrator | 2025-09-03 00:34:22.907131 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-03 00:34:22.907144 | orchestrator | Wednesday 03 September 2025 00:34:22 +0000 (0:00:00.666) 0:00:05.348 *** 2025-09-03 00:34:22.907181 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:34:22.907189 | orchestrator | 2025-09-03 00:34:22.907196 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-03 00:34:22.907205 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-03 00:34:22.907214 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-03 00:34:22.907221 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-03 00:34:22.907229 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-03 00:34:22.907236 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-03 00:34:22.907244 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-03 00:34:22.907251 | orchestrator | 2025-09-03 00:34:22.907258 | orchestrator | 2025-09-03 00:34:22.907265 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-03 00:34:22.907273 | orchestrator | Wednesday 03 September 2025 00:34:22 +0000 (0:00:00.037) 0:00:05.385 *** 2025-09-03 00:34:22.907280 | orchestrator | =============================================================================== 2025-09-03 00:34:22.907288 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.12s 2025-09-03 00:34:22.907299 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.59s 2025-09-03 00:34:22.907306 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.53s 2025-09-03 00:34:23.158543 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2025-09-03 00:34:35.069949 | orchestrator | 2025-09-03 00:34:35 | INFO  | Task 41dba365-a592-48d3-a81d-014de7d1ddcf (wait-for-connection) was prepared for execution. 2025-09-03 00:34:35.070147 | orchestrator | 2025-09-03 00:34:35 | INFO  | It takes a moment until task 41dba365-a592-48d3-a81d-014de7d1ddcf (wait-for-connection) has been started and output is visible here. 
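The wait-for-connection play that follows simply blocks until each rebooted node answers over SSH again. A roughly equivalent ad-hoc call would be the sketch below; the limit matches the command line above, the timeout value is an assumption (600 seconds is the module default):

# retries the connection plugin until every host in testbed-nodes is reachable again
ansible testbed-nodes -m ansible.builtin.wait_for_connection -a 'timeout=600'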
2025-09-03 00:34:50.585049 | orchestrator | 2025-09-03 00:34:50.585183 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-09-03 00:34:50.585202 | orchestrator | 2025-09-03 00:34:50.585214 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-09-03 00:34:50.585226 | orchestrator | Wednesday 03 September 2025 00:34:38 +0000 (0:00:00.173) 0:00:00.173 *** 2025-09-03 00:34:50.585238 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:34:50.585250 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:34:50.585262 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:34:50.585272 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:34:50.585283 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:34:50.585294 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:34:50.585305 | orchestrator | 2025-09-03 00:34:50.585317 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-03 00:34:50.585383 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-03 00:34:50.585398 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-03 00:34:50.585409 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-03 00:34:50.585448 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-03 00:34:50.585460 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-03 00:34:50.585471 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-03 00:34:50.585482 | orchestrator | 2025-09-03 00:34:50.585493 | orchestrator | 2025-09-03 00:34:50.585504 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-03 00:34:50.585515 | orchestrator | Wednesday 03 September 2025 00:34:50 +0000 (0:00:11.446) 0:00:11.620 *** 2025-09-03 00:34:50.585526 | orchestrator | =============================================================================== 2025-09-03 00:34:50.585537 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.45s 2025-09-03 00:34:50.843460 | orchestrator | + osism apply hddtemp 2025-09-03 00:35:02.800808 | orchestrator | 2025-09-03 00:35:02 | INFO  | Task 02334cbb-8938-407a-946f-19d41ae16128 (hddtemp) was prepared for execution. 2025-09-03 00:35:02.800951 | orchestrator | 2025-09-03 00:35:02 | INFO  | It takes a moment until task 02334cbb-8938-407a-946f-19d41ae16128 (hddtemp) has been started and output is visible here. 
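The hddtemp role applied below removes the hddtemp package and relies on the in-kernel drivetemp hwmon driver plus lm-sensors instead. A manual equivalent on a Debian-family host might look like this sketch; the modules-load.d path is an assumption about how the role persists the module:

# sketch: enable drivetemp now and at boot, then install and start lm-sensors
echo drivetemp | sudo tee /etc/modules-load.d/drivetemp.conf   # "Enable Kernel Module drivetemp"
sudo modprobe drivetemp                                        # "Load Kernel Module drivetemp"
sudo apt-get remove -y hddtemp                                 # "Remove hddtemp package"
sudo apt-get install -y lm-sensors                             # "Install lm-sensors"
sudo systemctl enable --now lm-sensors.service                 # "Manage lm-sensors service"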
2025-09-03 00:35:29.010869 | orchestrator | 2025-09-03 00:35:29.011018 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-09-03 00:35:29.011035 | orchestrator | 2025-09-03 00:35:29.011049 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-09-03 00:35:29.011061 | orchestrator | Wednesday 03 September 2025 00:35:06 +0000 (0:00:00.252) 0:00:00.252 *** 2025-09-03 00:35:29.011073 | orchestrator | ok: [testbed-manager] 2025-09-03 00:35:29.011086 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:35:29.011097 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:35:29.011108 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:35:29.011119 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:35:29.011130 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:35:29.011141 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:35:29.011152 | orchestrator | 2025-09-03 00:35:29.011163 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-09-03 00:35:29.011174 | orchestrator | Wednesday 03 September 2025 00:35:07 +0000 (0:00:00.564) 0:00:00.816 *** 2025-09-03 00:35:29.011211 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-03 00:35:29.011226 | orchestrator | 2025-09-03 00:35:29.011237 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-09-03 00:35:29.011249 | orchestrator | Wednesday 03 September 2025 00:35:08 +0000 (0:00:00.855) 0:00:01.672 *** 2025-09-03 00:35:29.011268 | orchestrator | ok: [testbed-manager] 2025-09-03 00:35:29.011287 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:35:29.011336 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:35:29.011354 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:35:29.011373 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:35:29.011391 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:35:29.011409 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:35:29.011429 | orchestrator | 2025-09-03 00:35:29.011449 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-09-03 00:35:29.011470 | orchestrator | Wednesday 03 September 2025 00:35:10 +0000 (0:00:01.870) 0:00:03.542 *** 2025-09-03 00:35:29.011484 | orchestrator | changed: [testbed-manager] 2025-09-03 00:35:29.011499 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:35:29.011512 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:35:29.011524 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:35:29.011537 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:35:29.011578 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:35:29.011590 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:35:29.011603 | orchestrator | 2025-09-03 00:35:29.011615 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2025-09-03 00:35:29.011628 | orchestrator | Wednesday 03 September 2025 00:35:10 +0000 (0:00:00.960) 0:00:04.502 *** 2025-09-03 00:35:29.011641 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:35:29.011654 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:35:29.011667 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:35:29.011679 | orchestrator | ok: [testbed-node-3] 2025-09-03 
00:35:29.011692 | orchestrator | ok: [testbed-manager] 2025-09-03 00:35:29.011706 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:35:29.011718 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:35:29.011730 | orchestrator | 2025-09-03 00:35:29.011741 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-09-03 00:35:29.011752 | orchestrator | Wednesday 03 September 2025 00:35:12 +0000 (0:00:01.064) 0:00:05.566 *** 2025-09-03 00:35:29.011763 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:35:29.011774 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:35:29.011787 | orchestrator | changed: [testbed-manager] 2025-09-03 00:35:29.011807 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:35:29.011825 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:35:29.011843 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:35:29.011860 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:35:29.011878 | orchestrator | 2025-09-03 00:35:29.011897 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-09-03 00:35:29.011916 | orchestrator | Wednesday 03 September 2025 00:35:12 +0000 (0:00:00.725) 0:00:06.292 *** 2025-09-03 00:35:29.011932 | orchestrator | changed: [testbed-manager] 2025-09-03 00:35:29.011944 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:35:29.011954 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:35:29.011965 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:35:29.011976 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:35:29.011986 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:35:29.011997 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:35:29.012007 | orchestrator | 2025-09-03 00:35:29.012018 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-09-03 00:35:29.012029 | orchestrator | Wednesday 03 September 2025 00:35:25 +0000 (0:00:12.661) 0:00:18.953 *** 2025-09-03 00:35:29.012040 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-03 00:35:29.012052 | orchestrator | 2025-09-03 00:35:29.012063 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2025-09-03 00:35:29.012074 | orchestrator | Wednesday 03 September 2025 00:35:26 +0000 (0:00:01.409) 0:00:20.363 *** 2025-09-03 00:35:29.012084 | orchestrator | changed: [testbed-manager] 2025-09-03 00:35:29.012095 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:35:29.012105 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:35:29.012116 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:35:29.012126 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:35:29.012137 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:35:29.012147 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:35:29.012158 | orchestrator | 2025-09-03 00:35:29.012168 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-03 00:35:29.012180 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-03 00:35:29.012213 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-03 00:35:29.012234 | 
orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-03 00:35:29.012256 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-03 00:35:29.012267 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-03 00:35:29.012278 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-03 00:35:29.012288 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-03 00:35:29.012332 | orchestrator | 2025-09-03 00:35:29.012343 | orchestrator | 2025-09-03 00:35:29.012354 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-03 00:35:29.012365 | orchestrator | Wednesday 03 September 2025 00:35:28 +0000 (0:00:01.837) 0:00:22.200 *** 2025-09-03 00:35:29.012376 | orchestrator | =============================================================================== 2025-09-03 00:35:29.012387 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.66s 2025-09-03 00:35:29.012397 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.87s 2025-09-03 00:35:29.012408 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.84s 2025-09-03 00:35:29.012419 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.41s 2025-09-03 00:35:29.012429 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.06s 2025-09-03 00:35:29.012440 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 0.96s 2025-09-03 00:35:29.012451 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 0.86s 2025-09-03 00:35:29.012461 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.73s 2025-09-03 00:35:29.012472 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.56s 2025-09-03 00:35:29.258405 | orchestrator | ++ semver latest 7.1.1 2025-09-03 00:35:29.306798 | orchestrator | + [[ -1 -ge 0 ]] 2025-09-03 00:35:29.306862 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-09-03 00:35:29.306876 | orchestrator | + sudo systemctl restart manager.service 2025-09-03 00:35:42.726387 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-09-03 00:35:42.726533 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-09-03 00:35:42.726549 | orchestrator | + local max_attempts=60 2025-09-03 00:35:42.726563 | orchestrator | + local name=ceph-ansible 2025-09-03 00:35:42.726574 | orchestrator | + local attempt_num=1 2025-09-03 00:35:42.726586 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-03 00:35:42.753175 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-03 00:35:42.753202 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-03 00:35:42.754012 | orchestrator | + sleep 5 2025-09-03 00:35:47.758121 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-03 00:35:47.806084 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-03 00:35:47.806124 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-03 00:35:47.806136 | orchestrator | + sleep 5 2025-09-03 
00:35:52.809881 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-03 00:35:52.839923 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-03 00:35:52.839982 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-03 00:35:52.839996 | orchestrator | + sleep 5 2025-09-03 00:35:57.844630 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-03 00:35:57.876858 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-03 00:35:57.876961 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-03 00:35:57.876991 | orchestrator | + sleep 5 2025-09-03 00:36:02.880862 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-03 00:36:02.916672 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-03 00:36:02.916753 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-03 00:36:02.916797 | orchestrator | + sleep 5 2025-09-03 00:36:07.920378 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-03 00:36:07.962862 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-03 00:36:07.962915 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-03 00:36:07.962929 | orchestrator | + sleep 5 2025-09-03 00:36:12.967691 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-03 00:36:13.008971 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-03 00:36:13.009040 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-03 00:36:13.009054 | orchestrator | + sleep 5 2025-09-03 00:36:18.015251 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-03 00:36:18.055376 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-03 00:36:18.055476 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-03 00:36:18.055492 | orchestrator | + sleep 5 2025-09-03 00:36:23.058654 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-03 00:36:23.076645 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-03 00:36:23.076688 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-03 00:36:23.076702 | orchestrator | + sleep 5 2025-09-03 00:36:28.080293 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-03 00:36:28.116990 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-03 00:36:28.117067 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-03 00:36:28.117082 | orchestrator | + sleep 5 2025-09-03 00:36:33.121048 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-03 00:36:33.160773 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-03 00:36:33.160836 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-03 00:36:33.160850 | orchestrator | + sleep 5 2025-09-03 00:36:38.166553 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-03 00:36:38.204570 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-03 00:36:38.204665 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-03 00:36:38.204682 | orchestrator | + sleep 5 2025-09-03 00:36:43.209575 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-03 00:36:43.247146 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-03 00:36:43.247227 | orchestrator | + (( attempt_num++ == 
max_attempts )) 2025-09-03 00:36:43.247269 | orchestrator | + sleep 5 2025-09-03 00:36:48.251720 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-03 00:36:48.288798 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-03 00:36:48.288868 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-09-03 00:36:48.288883 | orchestrator | + local max_attempts=60 2025-09-03 00:36:48.289332 | orchestrator | + local name=kolla-ansible 2025-09-03 00:36:48.289355 | orchestrator | + local attempt_num=1 2025-09-03 00:36:48.289367 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-09-03 00:36:48.317344 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-03 00:36:48.317395 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-09-03 00:36:48.317403 | orchestrator | + local max_attempts=60 2025-09-03 00:36:48.317410 | orchestrator | + local name=osism-ansible 2025-09-03 00:36:48.317417 | orchestrator | + local attempt_num=1 2025-09-03 00:36:48.318339 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-09-03 00:36:48.355676 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-03 00:36:48.355717 | orchestrator | + [[ true == \t\r\u\e ]] 2025-09-03 00:36:48.355728 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-09-03 00:36:48.520990 | orchestrator | ARA in ceph-ansible already disabled. 2025-09-03 00:36:48.695802 | orchestrator | ARA in kolla-ansible already disabled. 2025-09-03 00:36:48.838874 | orchestrator | ARA in osism-ansible already disabled. 2025-09-03 00:36:48.961581 | orchestrator | ARA in osism-kubernetes already disabled. 2025-09-03 00:36:48.961666 | orchestrator | + osism apply gather-facts 2025-09-03 00:37:00.941569 | orchestrator | 2025-09-03 00:37:00 | INFO  | Task dd5656a9-2633-49fe-9362-64e296e562bc (gather-facts) was prepared for execution. 2025-09-03 00:37:00.941658 | orchestrator | 2025-09-03 00:37:00 | INFO  | It takes a moment until task dd5656a9-2633-49fe-9362-64e296e562bc (gather-facts) has been started and output is visible here. 
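The shell trace above repeatedly runs docker inspect -f '{{.State.Health.Status}}' against the ceph-ansible, kolla-ansible and osism-ansible containers until each reports healthy. A minimal sketch of that polling helper, reconstructed from the trace (function name, arguments and the 5-second interval as logged; the give-up branch is an assumption, since the log never reaches it):

    wait_for_container_healthy() {
        local max_attempts="$1"
        local name="$2"
        local attempt_num=1
        # Poll Docker's reported health status for the container every 5 seconds.
        until [[ "$(/usr/bin/docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
            if (( attempt_num++ == max_attempts )); then
                # Assumed failure path; not exercised in the log above.
                echo "Container $name did not become healthy in time" >&2
                return 1
            fi
            sleep 5
        done
    }

    wait_for_container_healthy 60 ceph-ansible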
2025-09-03 00:37:13.734399 | orchestrator | 2025-09-03 00:37:13.734550 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-03 00:37:13.734606 | orchestrator | 2025-09-03 00:37:13.734625 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-03 00:37:13.734642 | orchestrator | Wednesday 03 September 2025 00:37:04 +0000 (0:00:00.197) 0:00:00.197 *** 2025-09-03 00:37:13.734661 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:37:13.734681 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:37:13.734699 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:37:13.734718 | orchestrator | ok: [testbed-manager] 2025-09-03 00:37:13.734736 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:37:13.734754 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:37:13.734773 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:37:13.734792 | orchestrator | 2025-09-03 00:37:13.734810 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-09-03 00:37:13.734829 | orchestrator | 2025-09-03 00:37:13.734848 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-09-03 00:37:13.734867 | orchestrator | Wednesday 03 September 2025 00:37:12 +0000 (0:00:08.361) 0:00:08.559 *** 2025-09-03 00:37:13.734887 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:37:13.734906 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:37:13.734925 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:37:13.734943 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:37:13.734961 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:37:13.734980 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:37:13.734999 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:37:13.735017 | orchestrator | 2025-09-03 00:37:13.735035 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-03 00:37:13.735054 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-03 00:37:13.735072 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-03 00:37:13.735090 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-03 00:37:13.735107 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-03 00:37:13.735124 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-03 00:37:13.735142 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-03 00:37:13.735159 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-03 00:37:13.735177 | orchestrator | 2025-09-03 00:37:13.735195 | orchestrator | 2025-09-03 00:37:13.735213 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-03 00:37:13.735262 | orchestrator | Wednesday 03 September 2025 00:37:13 +0000 (0:00:00.488) 0:00:09.047 *** 2025-09-03 00:37:13.735279 | orchestrator | =============================================================================== 2025-09-03 00:37:13.735296 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.36s 2025-09-03 
00:37:13.735313 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.49s 2025-09-03 00:37:14.102307 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2025-09-03 00:37:14.119361 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2025-09-03 00:37:14.138444 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-09-03 00:37:14.155964 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2025-09-03 00:37:14.169125 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2025-09-03 00:37:14.180459 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2025-09-03 00:37:14.196211 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2025-09-03 00:37:14.206367 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2025-09-03 00:37:14.219396 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-09-03 00:37:14.236803 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-09-03 00:37:14.249409 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2025-09-03 00:37:14.261419 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2025-09-03 00:37:14.274368 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2025-09-03 00:37:14.285783 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2025-09-03 00:37:14.297518 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2025-09-03 00:37:14.309144 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2025-09-03 00:37:14.320537 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-09-03 00:37:14.331913 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-09-03 00:37:14.346971 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2025-09-03 00:37:14.358481 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-09-03 00:37:14.374586 | orchestrator | + [[ false == \t\r\u\e ]] 2025-09-03 00:37:14.481867 | orchestrator | ok: Runtime: 0:23:28.810213 2025-09-03 00:37:14.580937 | 2025-09-03 00:37:14.581078 | TASK [Deploy services] 2025-09-03 00:37:15.112716 | orchestrator | skipping: Conditional result was False 2025-09-03 00:37:15.122074 | 2025-09-03 00:37:15.122197 | TASK [Deploy in a nutshell] 2025-09-03 00:37:15.812650 | orchestrator | + set -e 
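The ln -sf calls above expose the numbered deploy, upgrade and bootstrap scripts as plain commands under /usr/local/bin. The same effect written as a loop, as a sketch (the mapping shown is only a subset of the symlinks taken from the trace):

    declare -A helpers=(
        [deploy-ceph-with-ansible]=/opt/configuration/scripts/deploy/100-ceph-with-ansible.sh
        [deploy-infrastructure]=/opt/configuration/scripts/deploy/200-infrastructure.sh
        [deploy-openstack]=/opt/configuration/scripts/deploy/300-openstack.sh
        [deploy-monitoring]=/opt/configuration/scripts/deploy/400-monitoring.sh
    )
    for name in "${!helpers[@]}"; do
        # ln -sf replaces any existing link, so re-running the bootstrap is harmless.
        sudo ln -sf "${helpers[$name]}" "/usr/local/bin/$name"
    done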
2025-09-03 00:37:15.812777 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-09-03 00:37:15.812787 | orchestrator | ++ export INTERACTIVE=false 2025-09-03 00:37:15.812797 | orchestrator | ++ INTERACTIVE=false 2025-09-03 00:37:15.812802 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-09-03 00:37:15.812815 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-09-03 00:37:15.812822 | orchestrator | + source /opt/manager-vars.sh 2025-09-03 00:37:15.812844 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-09-03 00:37:15.812856 | orchestrator | ++ NUMBER_OF_NODES=6 2025-09-03 00:37:15.812861 | orchestrator | ++ export CEPH_VERSION=reef 2025-09-03 00:37:15.812868 | orchestrator | ++ CEPH_VERSION=reef 2025-09-03 00:37:15.812872 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-09-03 00:37:15.812880 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-09-03 00:37:15.812884 | orchestrator | 2025-09-03 00:37:15.812888 | orchestrator | # PULL IMAGES 2025-09-03 00:37:15.812892 | orchestrator | 2025-09-03 00:37:15.812896 | orchestrator | ++ export MANAGER_VERSION=latest 2025-09-03 00:37:15.812902 | orchestrator | ++ MANAGER_VERSION=latest 2025-09-03 00:37:15.812906 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-09-03 00:37:15.812911 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-09-03 00:37:15.812915 | orchestrator | ++ export ARA=false 2025-09-03 00:37:15.812918 | orchestrator | ++ ARA=false 2025-09-03 00:37:15.812922 | orchestrator | ++ export DEPLOY_MODE=manager 2025-09-03 00:37:15.812926 | orchestrator | ++ DEPLOY_MODE=manager 2025-09-03 00:37:15.812930 | orchestrator | ++ export TEMPEST=true 2025-09-03 00:37:15.812934 | orchestrator | ++ TEMPEST=true 2025-09-03 00:37:15.812938 | orchestrator | ++ export IS_ZUUL=true 2025-09-03 00:37:15.812942 | orchestrator | ++ IS_ZUUL=true 2025-09-03 00:37:15.812946 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.254 2025-09-03 00:37:15.812950 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.254 2025-09-03 00:37:15.812954 | orchestrator | ++ export EXTERNAL_API=false 2025-09-03 00:37:15.812957 | orchestrator | ++ EXTERNAL_API=false 2025-09-03 00:37:15.812961 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-09-03 00:37:15.812965 | orchestrator | ++ IMAGE_USER=ubuntu 2025-09-03 00:37:15.812969 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-09-03 00:37:15.812973 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-09-03 00:37:15.812977 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-09-03 00:37:15.812981 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-09-03 00:37:15.812985 | orchestrator | + echo 2025-09-03 00:37:15.812989 | orchestrator | + echo '# PULL IMAGES' 2025-09-03 00:37:15.812993 | orchestrator | + echo 2025-09-03 00:37:15.813381 | orchestrator | ++ semver latest 7.0.0 2025-09-03 00:37:15.858373 | orchestrator | + [[ -1 -ge 0 ]] 2025-09-03 00:37:15.858438 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-09-03 00:37:15.858446 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2025-09-03 00:37:17.684826 | orchestrator | 2025-09-03 00:37:17 | INFO  | Trying to run play pull-images in environment custom 2025-09-03 00:37:27.750606 | orchestrator | 2025-09-03 00:37:27 | INFO  | Task 80b759a9-9ae6-4b79-b2a8-3fcbaf49fda7 (pull-images) was prepared for execution. 2025-09-03 00:37:27.750742 | orchestrator | 2025-09-03 00:37:27 | INFO  | Task 80b759a9-9ae6-4b79-b2a8-3fcbaf49fda7 is running in background. No more output. Check ARA for logs. 
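The trace above shows how the pull step is gated: semver (the comparison helper invoked in the script) prints -1 because MANAGER_VERSION=latest is not a numeric version, and the fallback test on the literal value "latest" then lets the play run anyway. A condensed sketch of that gate:

    # Run pull-images when the manager is at least 7.0.0 or pinned to "latest".
    # "semver" is the helper seen in the trace; the -1/0/1 output contract for
    # older/equal/newer is inferred from the logged "-1".
    if [[ "$(semver "$MANAGER_VERSION" 7.0.0)" -ge 0 ]] || [[ "$MANAGER_VERSION" == "latest" ]]; then
        osism apply --no-wait -r 2 -e custom pull-images
    fi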
2025-09-03 00:37:29.906750 | orchestrator | 2025-09-03 00:37:29 | INFO  | Trying to run play wipe-partitions in environment custom 2025-09-03 00:37:40.044347 | orchestrator | 2025-09-03 00:37:40 | INFO  | Task d1f33697-b266-404c-b46b-e7083ab8648d (wipe-partitions) was prepared for execution. 2025-09-03 00:37:40.044465 | orchestrator | 2025-09-03 00:37:40 | INFO  | It takes a moment until task d1f33697-b266-404c-b46b-e7083ab8648d (wipe-partitions) has been started and output is visible here. 2025-09-03 00:37:54.145155 | orchestrator | 2025-09-03 00:37:54.145322 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-09-03 00:37:54.145339 | orchestrator | 2025-09-03 00:37:54.145350 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-09-03 00:37:54.145368 | orchestrator | Wednesday 03 September 2025 00:37:44 +0000 (0:00:00.140) 0:00:00.140 *** 2025-09-03 00:37:54.145379 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:37:54.145391 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:37:54.145402 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:37:54.145412 | orchestrator | 2025-09-03 00:37:54.145423 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-09-03 00:37:54.145457 | orchestrator | Wednesday 03 September 2025 00:37:44 +0000 (0:00:00.584) 0:00:00.725 *** 2025-09-03 00:37:54.145468 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:37:54.145478 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:37:54.145491 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:37:54.145501 | orchestrator | 2025-09-03 00:37:54.145511 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2025-09-03 00:37:54.145521 | orchestrator | Wednesday 03 September 2025 00:37:44 +0000 (0:00:00.243) 0:00:00.968 *** 2025-09-03 00:37:54.145530 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:37:54.145541 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:37:54.145551 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:37:54.145560 | orchestrator | 2025-09-03 00:37:54.145570 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-09-03 00:37:54.145580 | orchestrator | Wednesday 03 September 2025 00:37:45 +0000 (0:00:00.823) 0:00:01.792 *** 2025-09-03 00:37:54.145590 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:37:54.145599 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:37:54.145609 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:37:54.145618 | orchestrator | 2025-09-03 00:37:54.145628 | orchestrator | TASK [Check device availability] *********************************************** 2025-09-03 00:37:54.145637 | orchestrator | Wednesday 03 September 2025 00:37:45 +0000 (0:00:00.258) 0:00:02.051 *** 2025-09-03 00:37:54.145647 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-09-03 00:37:54.145662 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-09-03 00:37:54.145680 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-09-03 00:37:54.145698 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-09-03 00:37:54.145714 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-09-03 00:37:54.145730 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-09-03 00:37:54.145747 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 
2025-09-03 00:37:54.145767 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-09-03 00:37:54.145786 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-09-03 00:37:54.145805 | orchestrator | 2025-09-03 00:37:54.145825 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2025-09-03 00:37:54.145850 | orchestrator | Wednesday 03 September 2025 00:37:48 +0000 (0:00:02.110) 0:00:04.161 *** 2025-09-03 00:37:54.145875 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2025-09-03 00:37:54.145897 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2025-09-03 00:37:54.145920 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2025-09-03 00:37:54.145944 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2025-09-03 00:37:54.145966 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2025-09-03 00:37:54.145984 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc) 2025-09-03 00:37:54.146001 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2025-09-03 00:37:54.146090 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2025-09-03 00:37:54.146108 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2025-09-03 00:37:54.146123 | orchestrator | 2025-09-03 00:37:54.146140 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2025-09-03 00:37:54.146157 | orchestrator | Wednesday 03 September 2025 00:37:49 +0000 (0:00:01.341) 0:00:05.502 *** 2025-09-03 00:37:54.146173 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-09-03 00:37:54.146189 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-09-03 00:37:54.146234 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-09-03 00:37:54.146252 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-09-03 00:37:54.146268 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-09-03 00:37:54.146285 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-09-03 00:37:54.146301 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-09-03 00:37:54.146330 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-09-03 00:37:54.146350 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-09-03 00:37:54.146360 | orchestrator | 2025-09-03 00:37:54.146370 | orchestrator | TASK [Reload udev rules] ******************************************************* 2025-09-03 00:37:54.146380 | orchestrator | Wednesday 03 September 2025 00:37:52 +0000 (0:00:03.241) 0:00:08.744 *** 2025-09-03 00:37:54.146390 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:37:54.146400 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:37:54.146409 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:37:54.146419 | orchestrator | 2025-09-03 00:37:54.146429 | orchestrator | TASK [Request device events from the kernel] *********************************** 2025-09-03 00:37:54.146438 | orchestrator | Wednesday 03 September 2025 00:37:53 +0000 (0:00:00.585) 0:00:09.330 *** 2025-09-03 00:37:54.146448 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:37:54.146458 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:37:54.146467 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:37:54.146477 | orchestrator | 2025-09-03 00:37:54.146487 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-03 00:37:54.146500 | orchestrator | testbed-node-3 : ok=7  changed=5  
unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-03 00:37:54.146511 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-03 00:37:54.146541 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-03 00:37:54.146551 | orchestrator | 2025-09-03 00:37:54.146561 | orchestrator | 2025-09-03 00:37:54.146571 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-03 00:37:54.146581 | orchestrator | Wednesday 03 September 2025 00:37:53 +0000 (0:00:00.600) 0:00:09.931 *** 2025-09-03 00:37:54.146591 | orchestrator | =============================================================================== 2025-09-03 00:37:54.146600 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 3.24s 2025-09-03 00:37:54.146610 | orchestrator | Check device availability ----------------------------------------------- 2.11s 2025-09-03 00:37:54.146620 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.34s 2025-09-03 00:37:54.146629 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.82s 2025-09-03 00:37:54.146639 | orchestrator | Request device events from the kernel ----------------------------------- 0.60s 2025-09-03 00:37:54.146649 | orchestrator | Reload udev rules ------------------------------------------------------- 0.59s 2025-09-03 00:37:54.146659 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.58s 2025-09-03 00:37:54.146668 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.26s 2025-09-03 00:37:54.146678 | orchestrator | Remove all rook related logical devices --------------------------------- 0.24s 2025-09-03 00:38:06.401893 | orchestrator | 2025-09-03 00:38:06 | INFO  | Task 31b38443-514b-47a2-a791-782e8b31e449 (facts) was prepared for execution. 2025-09-03 00:38:06.402127 | orchestrator | 2025-09-03 00:38:06 | INFO  | It takes a moment until task 31b38443-514b-47a2-a791-782e8b31e449 (facts) has been started and output is visible here. 
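The wipe-partitions play above prepares the spare disks (/dev/sdb, /dev/sdc and /dev/sdd on testbed-node-3/4/5) for Ceph by clearing any old signatures. Roughly equivalent shell, as a sketch: the wipefs and 32M zeroing steps follow the task names directly, while the udevadm calls are an assumption for how the "Reload udev rules" and "Request device events from the kernel" tasks are typically implemented:

    for dev in /dev/sdb /dev/sdc /dev/sdd; do
        sudo wipefs -a "$dev"                          # drop filesystem/RAID/partition signatures
        sudo dd if=/dev/zero of="$dev" bs=1M count=32  # overwrite the first 32M with zeros
    done
    sudo udevadm control --reload-rules                # reload udev rules
    sudo udevadm trigger                               # request device events from the kernel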
2025-09-03 00:38:19.303393 | orchestrator | 2025-09-03 00:38:19.303524 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-09-03 00:38:19.303544 | orchestrator | 2025-09-03 00:38:19.303557 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-09-03 00:38:19.303569 | orchestrator | Wednesday 03 September 2025 00:38:10 +0000 (0:00:00.264) 0:00:00.264 *** 2025-09-03 00:38:19.303581 | orchestrator | ok: [testbed-manager] 2025-09-03 00:38:19.303595 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:38:19.303607 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:38:19.303646 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:38:19.303658 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:38:19.303669 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:38:19.303680 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:38:19.303692 | orchestrator | 2025-09-03 00:38:19.303703 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-09-03 00:38:19.303714 | orchestrator | Wednesday 03 September 2025 00:38:11 +0000 (0:00:01.093) 0:00:01.358 *** 2025-09-03 00:38:19.303725 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:38:19.303737 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:38:19.303748 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:38:19.303759 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:38:19.303771 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:38:19.303782 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:38:19.303793 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:38:19.303804 | orchestrator | 2025-09-03 00:38:19.303815 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-03 00:38:19.303826 | orchestrator | 2025-09-03 00:38:19.303854 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-03 00:38:19.303866 | orchestrator | Wednesday 03 September 2025 00:38:12 +0000 (0:00:01.251) 0:00:02.610 *** 2025-09-03 00:38:19.303877 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:38:19.303888 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:38:19.303900 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:38:19.303911 | orchestrator | ok: [testbed-manager] 2025-09-03 00:38:19.303923 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:38:19.303936 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:38:19.303949 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:38:19.303962 | orchestrator | 2025-09-03 00:38:19.303976 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-09-03 00:38:19.303989 | orchestrator | 2025-09-03 00:38:19.304002 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-09-03 00:38:19.304016 | orchestrator | Wednesday 03 September 2025 00:38:18 +0000 (0:00:05.531) 0:00:08.141 *** 2025-09-03 00:38:19.304028 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:38:19.304043 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:38:19.304056 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:38:19.304069 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:38:19.304082 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:38:19.304095 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:38:19.304108 | orchestrator | skipping: 
[testbed-node-5] 2025-09-03 00:38:19.304120 | orchestrator | 2025-09-03 00:38:19.304132 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-03 00:38:19.304146 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-03 00:38:19.304160 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-03 00:38:19.304174 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-03 00:38:19.304212 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-03 00:38:19.304226 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-03 00:38:19.304239 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-03 00:38:19.304252 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-03 00:38:19.304266 | orchestrator | 2025-09-03 00:38:19.304289 | orchestrator | 2025-09-03 00:38:19.304300 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-03 00:38:19.304311 | orchestrator | Wednesday 03 September 2025 00:38:18 +0000 (0:00:00.697) 0:00:08.839 *** 2025-09-03 00:38:19.304322 | orchestrator | =============================================================================== 2025-09-03 00:38:19.304333 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.53s 2025-09-03 00:38:19.304344 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.25s 2025-09-03 00:38:19.304355 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.09s 2025-09-03 00:38:19.304366 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.70s 2025-09-03 00:38:21.607161 | orchestrator | 2025-09-03 00:38:21 | INFO  | Task 4216417e-da9a-4407-bfdc-0af441842406 (ceph-configure-lvm-volumes) was prepared for execution. 2025-09-03 00:38:21.607345 | orchestrator | 2025-09-03 00:38:21 | INFO  | It takes a moment until task 4216417e-da9a-4407-bfdc-0af441842406 (ceph-configure-lvm-volumes) has been started and output is visible here. 
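The ceph-configure-lvm-volumes play that starts here enumerates the block devices on each storage node, keeps a stable UUID per OSD disk (the osd_lvm_uuid values printed further down), derives the ceph-ansible lvm_volumes entries from them, and finally writes the result out via the "Write configuration file" handler on the manager. The naming scheme, as a sketch (uuidgen stands in for however the play actually generates the UUID):

    osd_lvm_uuid="$(uuidgen)"
    data="osd-block-${osd_lvm_uuid}"   # logical volume name for the OSD data
    data_vg="ceph-${osd_lvm_uuid}"     # volume group holding that LV
    echo "lvm_volumes entry: data=${data} data_vg=${data_vg}"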
2025-09-03 00:38:33.025395 | orchestrator | 2025-09-03 00:38:33.025526 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-09-03 00:38:33.025544 | orchestrator | 2025-09-03 00:38:33.025556 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-03 00:38:33.025568 | orchestrator | Wednesday 03 September 2025 00:38:25 +0000 (0:00:00.327) 0:00:00.327 *** 2025-09-03 00:38:33.025581 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-03 00:38:33.025593 | orchestrator | 2025-09-03 00:38:33.025604 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-03 00:38:33.025615 | orchestrator | Wednesday 03 September 2025 00:38:25 +0000 (0:00:00.245) 0:00:00.572 *** 2025-09-03 00:38:33.025626 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:38:33.025640 | orchestrator | 2025-09-03 00:38:33.025651 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:38:33.025662 | orchestrator | Wednesday 03 September 2025 00:38:26 +0000 (0:00:00.223) 0:00:00.796 *** 2025-09-03 00:38:33.025673 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-09-03 00:38:33.025685 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-09-03 00:38:33.025696 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-09-03 00:38:33.025718 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-09-03 00:38:33.025729 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-09-03 00:38:33.025740 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-09-03 00:38:33.025751 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-09-03 00:38:33.025762 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-09-03 00:38:33.025773 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-09-03 00:38:33.025784 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-09-03 00:38:33.025795 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-09-03 00:38:33.025806 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-09-03 00:38:33.025817 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-09-03 00:38:33.025828 | orchestrator | 2025-09-03 00:38:33.025839 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:38:33.025850 | orchestrator | Wednesday 03 September 2025 00:38:26 +0000 (0:00:00.382) 0:00:01.178 *** 2025-09-03 00:38:33.025861 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:38:33.025896 | orchestrator | 2025-09-03 00:38:33.025910 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:38:33.025924 | orchestrator | Wednesday 03 September 2025 00:38:27 +0000 (0:00:00.506) 0:00:01.685 *** 2025-09-03 00:38:33.025937 | orchestrator | skipping: [testbed-node-3] 2025-09-03 
00:38:33.025950 | orchestrator | 2025-09-03 00:38:33.025964 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:38:33.025977 | orchestrator | Wednesday 03 September 2025 00:38:27 +0000 (0:00:00.199) 0:00:01.884 *** 2025-09-03 00:38:33.025991 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:38:33.026003 | orchestrator | 2025-09-03 00:38:33.026072 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:38:33.026088 | orchestrator | Wednesday 03 September 2025 00:38:27 +0000 (0:00:00.201) 0:00:02.085 *** 2025-09-03 00:38:33.026101 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:38:33.026120 | orchestrator | 2025-09-03 00:38:33.026133 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:38:33.026147 | orchestrator | Wednesday 03 September 2025 00:38:27 +0000 (0:00:00.191) 0:00:02.277 *** 2025-09-03 00:38:33.026160 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:38:33.026203 | orchestrator | 2025-09-03 00:38:33.026218 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:38:33.026232 | orchestrator | Wednesday 03 September 2025 00:38:27 +0000 (0:00:00.205) 0:00:02.482 *** 2025-09-03 00:38:33.026245 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:38:33.026258 | orchestrator | 2025-09-03 00:38:33.026270 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:38:33.026280 | orchestrator | Wednesday 03 September 2025 00:38:28 +0000 (0:00:00.194) 0:00:02.676 *** 2025-09-03 00:38:33.026291 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:38:33.026302 | orchestrator | 2025-09-03 00:38:33.026313 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:38:33.026324 | orchestrator | Wednesday 03 September 2025 00:38:28 +0000 (0:00:00.195) 0:00:02.872 *** 2025-09-03 00:38:33.026335 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:38:33.026345 | orchestrator | 2025-09-03 00:38:33.026356 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:38:33.026367 | orchestrator | Wednesday 03 September 2025 00:38:28 +0000 (0:00:00.203) 0:00:03.075 *** 2025-09-03 00:38:33.026377 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_fc54dbc2-85fa-4a7d-8bd9-52ff930caf77) 2025-09-03 00:38:33.026390 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_fc54dbc2-85fa-4a7d-8bd9-52ff930caf77) 2025-09-03 00:38:33.026401 | orchestrator | 2025-09-03 00:38:33.026411 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:38:33.026422 | orchestrator | Wednesday 03 September 2025 00:38:28 +0000 (0:00:00.389) 0:00:03.464 *** 2025-09-03 00:38:33.026453 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_9ba28649-84e7-4d30-a12b-e93c6e95fbcd) 2025-09-03 00:38:33.026465 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_9ba28649-84e7-4d30-a12b-e93c6e95fbcd) 2025-09-03 00:38:33.026476 | orchestrator | 2025-09-03 00:38:33.026486 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:38:33.026497 | orchestrator | Wednesday 03 September 2025 00:38:29 +0000 (0:00:00.401) 0:00:03.866 *** 2025-09-03 
00:38:33.026514 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_7512b390-1fa3-4840-9943-7c6482fdb145) 2025-09-03 00:38:33.026526 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_7512b390-1fa3-4840-9943-7c6482fdb145) 2025-09-03 00:38:33.026536 | orchestrator | 2025-09-03 00:38:33.026547 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:38:33.026558 | orchestrator | Wednesday 03 September 2025 00:38:29 +0000 (0:00:00.589) 0:00:04.455 *** 2025-09-03 00:38:33.026569 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e885087e-46ab-46e4-825b-bdcddcbfdff8) 2025-09-03 00:38:33.026589 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e885087e-46ab-46e4-825b-bdcddcbfdff8) 2025-09-03 00:38:33.026600 | orchestrator | 2025-09-03 00:38:33.026611 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:38:33.026622 | orchestrator | Wednesday 03 September 2025 00:38:30 +0000 (0:00:00.582) 0:00:05.038 *** 2025-09-03 00:38:33.026633 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-03 00:38:33.026644 | orchestrator | 2025-09-03 00:38:33.026655 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:38:33.026666 | orchestrator | Wednesday 03 September 2025 00:38:31 +0000 (0:00:00.697) 0:00:05.735 *** 2025-09-03 00:38:33.026676 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-09-03 00:38:33.026687 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-09-03 00:38:33.026698 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-09-03 00:38:33.026708 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-09-03 00:38:33.026719 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-09-03 00:38:33.026730 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-09-03 00:38:33.026741 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-09-03 00:38:33.026752 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-09-03 00:38:33.026762 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-09-03 00:38:33.026773 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-09-03 00:38:33.026784 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-09-03 00:38:33.026795 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-09-03 00:38:33.026806 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-09-03 00:38:33.026816 | orchestrator | 2025-09-03 00:38:33.026827 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:38:33.026838 | orchestrator | Wednesday 03 September 2025 00:38:31 +0000 (0:00:00.370) 0:00:06.106 *** 2025-09-03 00:38:33.026849 | orchestrator | skipping: [testbed-node-3] 
2025-09-03 00:38:33.026860 | orchestrator | 2025-09-03 00:38:33.026871 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:38:33.026882 | orchestrator | Wednesday 03 September 2025 00:38:31 +0000 (0:00:00.188) 0:00:06.294 *** 2025-09-03 00:38:33.026893 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:38:33.026903 | orchestrator | 2025-09-03 00:38:33.026914 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:38:33.026925 | orchestrator | Wednesday 03 September 2025 00:38:31 +0000 (0:00:00.218) 0:00:06.512 *** 2025-09-03 00:38:33.026936 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:38:33.026946 | orchestrator | 2025-09-03 00:38:33.026957 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:38:33.026968 | orchestrator | Wednesday 03 September 2025 00:38:32 +0000 (0:00:00.182) 0:00:06.695 *** 2025-09-03 00:38:33.026979 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:38:33.026990 | orchestrator | 2025-09-03 00:38:33.027000 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:38:33.027011 | orchestrator | Wednesday 03 September 2025 00:38:32 +0000 (0:00:00.190) 0:00:06.886 *** 2025-09-03 00:38:33.027022 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:38:33.027033 | orchestrator | 2025-09-03 00:38:33.027050 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:38:33.027061 | orchestrator | Wednesday 03 September 2025 00:38:32 +0000 (0:00:00.200) 0:00:07.087 *** 2025-09-03 00:38:33.027072 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:38:33.027083 | orchestrator | 2025-09-03 00:38:33.027093 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:38:33.027104 | orchestrator | Wednesday 03 September 2025 00:38:32 +0000 (0:00:00.189) 0:00:07.276 *** 2025-09-03 00:38:33.027115 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:38:33.027126 | orchestrator | 2025-09-03 00:38:33.027137 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:38:33.027148 | orchestrator | Wednesday 03 September 2025 00:38:32 +0000 (0:00:00.177) 0:00:07.454 *** 2025-09-03 00:38:33.027165 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:38:40.509996 | orchestrator | 2025-09-03 00:38:40.510230 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:38:40.510250 | orchestrator | Wednesday 03 September 2025 00:38:33 +0000 (0:00:00.188) 0:00:07.643 *** 2025-09-03 00:38:40.510263 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-09-03 00:38:40.510277 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-09-03 00:38:40.510289 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-09-03 00:38:40.510301 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-09-03 00:38:40.510312 | orchestrator | 2025-09-03 00:38:40.510323 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:38:40.510334 | orchestrator | Wednesday 03 September 2025 00:38:34 +0000 (0:00:00.999) 0:00:08.642 *** 2025-09-03 00:38:40.510367 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:38:40.510380 | orchestrator | 2025-09-03 00:38:40.510391 | orchestrator | 
TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:38:40.510402 | orchestrator | Wednesday 03 September 2025 00:38:34 +0000 (0:00:00.193) 0:00:08.836 *** 2025-09-03 00:38:40.510413 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:38:40.510424 | orchestrator | 2025-09-03 00:38:40.510435 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:38:40.510446 | orchestrator | Wednesday 03 September 2025 00:38:34 +0000 (0:00:00.187) 0:00:09.024 *** 2025-09-03 00:38:40.510457 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:38:40.510468 | orchestrator | 2025-09-03 00:38:40.510479 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:38:40.510490 | orchestrator | Wednesday 03 September 2025 00:38:34 +0000 (0:00:00.188) 0:00:09.212 *** 2025-09-03 00:38:40.510501 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:38:40.510511 | orchestrator | 2025-09-03 00:38:40.510523 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-09-03 00:38:40.510534 | orchestrator | Wednesday 03 September 2025 00:38:34 +0000 (0:00:00.212) 0:00:09.424 *** 2025-09-03 00:38:40.510545 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2025-09-03 00:38:40.510556 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2025-09-03 00:38:40.510567 | orchestrator | 2025-09-03 00:38:40.510578 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-09-03 00:38:40.510588 | orchestrator | Wednesday 03 September 2025 00:38:34 +0000 (0:00:00.159) 0:00:09.584 *** 2025-09-03 00:38:40.510599 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:38:40.510610 | orchestrator | 2025-09-03 00:38:40.510621 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-09-03 00:38:40.510632 | orchestrator | Wednesday 03 September 2025 00:38:35 +0000 (0:00:00.131) 0:00:09.716 *** 2025-09-03 00:38:40.510643 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:38:40.510654 | orchestrator | 2025-09-03 00:38:40.510665 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-09-03 00:38:40.510675 | orchestrator | Wednesday 03 September 2025 00:38:35 +0000 (0:00:00.136) 0:00:09.852 *** 2025-09-03 00:38:40.510686 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:38:40.510727 | orchestrator | 2025-09-03 00:38:40.510739 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-09-03 00:38:40.510750 | orchestrator | Wednesday 03 September 2025 00:38:35 +0000 (0:00:00.130) 0:00:09.982 *** 2025-09-03 00:38:40.510761 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:38:40.510773 | orchestrator | 2025-09-03 00:38:40.510784 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-09-03 00:38:40.510795 | orchestrator | Wednesday 03 September 2025 00:38:35 +0000 (0:00:00.143) 0:00:10.126 *** 2025-09-03 00:38:40.510807 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd05881db-8953-52a0-98ec-dd1036bee846'}}) 2025-09-03 00:38:40.510819 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2e5a0ee6-219f-5b14-b340-2bfd497a8fc5'}}) 2025-09-03 00:38:40.510830 | orchestrator | 
2025-09-03 00:38:40.510841 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-09-03 00:38:40.510852 | orchestrator | Wednesday 03 September 2025 00:38:35 +0000 (0:00:00.159) 0:00:10.286 *** 2025-09-03 00:38:40.510863 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd05881db-8953-52a0-98ec-dd1036bee846'}})  2025-09-03 00:38:40.510885 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2e5a0ee6-219f-5b14-b340-2bfd497a8fc5'}})  2025-09-03 00:38:40.510897 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:38:40.510908 | orchestrator | 2025-09-03 00:38:40.510919 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-09-03 00:38:40.510930 | orchestrator | Wednesday 03 September 2025 00:38:35 +0000 (0:00:00.143) 0:00:10.429 *** 2025-09-03 00:38:40.510941 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd05881db-8953-52a0-98ec-dd1036bee846'}})  2025-09-03 00:38:40.510952 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2e5a0ee6-219f-5b14-b340-2bfd497a8fc5'}})  2025-09-03 00:38:40.510963 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:38:40.510974 | orchestrator | 2025-09-03 00:38:40.510985 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-09-03 00:38:40.510995 | orchestrator | Wednesday 03 September 2025 00:38:36 +0000 (0:00:00.332) 0:00:10.762 *** 2025-09-03 00:38:40.511006 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd05881db-8953-52a0-98ec-dd1036bee846'}})  2025-09-03 00:38:40.511017 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2e5a0ee6-219f-5b14-b340-2bfd497a8fc5'}})  2025-09-03 00:38:40.511028 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:38:40.511039 | orchestrator | 2025-09-03 00:38:40.511070 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-09-03 00:38:40.511082 | orchestrator | Wednesday 03 September 2025 00:38:36 +0000 (0:00:00.148) 0:00:10.910 *** 2025-09-03 00:38:40.511093 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:38:40.511104 | orchestrator | 2025-09-03 00:38:40.511115 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-09-03 00:38:40.511126 | orchestrator | Wednesday 03 September 2025 00:38:36 +0000 (0:00:00.128) 0:00:11.039 *** 2025-09-03 00:38:40.511137 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:38:40.511147 | orchestrator | 2025-09-03 00:38:40.511158 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-09-03 00:38:40.511188 | orchestrator | Wednesday 03 September 2025 00:38:36 +0000 (0:00:00.148) 0:00:11.188 *** 2025-09-03 00:38:40.511199 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:38:40.511210 | orchestrator | 2025-09-03 00:38:40.511221 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-09-03 00:38:40.511232 | orchestrator | Wednesday 03 September 2025 00:38:36 +0000 (0:00:00.127) 0:00:11.315 *** 2025-09-03 00:38:40.511242 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:38:40.511253 | orchestrator | 2025-09-03 00:38:40.511273 | orchestrator | TASK [Set DB+WAL devices config data] 
****************************************** 2025-09-03 00:38:40.511284 | orchestrator | Wednesday 03 September 2025 00:38:36 +0000 (0:00:00.137) 0:00:11.453 *** 2025-09-03 00:38:40.511295 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:38:40.511306 | orchestrator | 2025-09-03 00:38:40.511317 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-09-03 00:38:40.511327 | orchestrator | Wednesday 03 September 2025 00:38:36 +0000 (0:00:00.135) 0:00:11.589 *** 2025-09-03 00:38:40.511338 | orchestrator | ok: [testbed-node-3] => { 2025-09-03 00:38:40.511349 | orchestrator |  "ceph_osd_devices": { 2025-09-03 00:38:40.511360 | orchestrator |  "sdb": { 2025-09-03 00:38:40.511372 | orchestrator |  "osd_lvm_uuid": "d05881db-8953-52a0-98ec-dd1036bee846" 2025-09-03 00:38:40.511383 | orchestrator |  }, 2025-09-03 00:38:40.511394 | orchestrator |  "sdc": { 2025-09-03 00:38:40.511405 | orchestrator |  "osd_lvm_uuid": "2e5a0ee6-219f-5b14-b340-2bfd497a8fc5" 2025-09-03 00:38:40.511416 | orchestrator |  } 2025-09-03 00:38:40.511427 | orchestrator |  } 2025-09-03 00:38:40.511439 | orchestrator | } 2025-09-03 00:38:40.511450 | orchestrator | 2025-09-03 00:38:40.511461 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-09-03 00:38:40.511472 | orchestrator | Wednesday 03 September 2025 00:38:37 +0000 (0:00:00.142) 0:00:11.732 *** 2025-09-03 00:38:40.511482 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:38:40.511493 | orchestrator | 2025-09-03 00:38:40.511504 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-09-03 00:38:40.511515 | orchestrator | Wednesday 03 September 2025 00:38:37 +0000 (0:00:00.128) 0:00:11.861 *** 2025-09-03 00:38:40.511532 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:38:40.511543 | orchestrator | 2025-09-03 00:38:40.511554 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-09-03 00:38:40.511565 | orchestrator | Wednesday 03 September 2025 00:38:37 +0000 (0:00:00.135) 0:00:11.996 *** 2025-09-03 00:38:40.511576 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:38:40.511587 | orchestrator | 2025-09-03 00:38:40.511597 | orchestrator | TASK [Print configuration data] ************************************************ 2025-09-03 00:38:40.511608 | orchestrator | Wednesday 03 September 2025 00:38:37 +0000 (0:00:00.152) 0:00:12.149 *** 2025-09-03 00:38:40.511619 | orchestrator | changed: [testbed-node-3] => { 2025-09-03 00:38:40.511630 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-09-03 00:38:40.511641 | orchestrator |  "ceph_osd_devices": { 2025-09-03 00:38:40.511652 | orchestrator |  "sdb": { 2025-09-03 00:38:40.511663 | orchestrator |  "osd_lvm_uuid": "d05881db-8953-52a0-98ec-dd1036bee846" 2025-09-03 00:38:40.511674 | orchestrator |  }, 2025-09-03 00:38:40.511686 | orchestrator |  "sdc": { 2025-09-03 00:38:40.511697 | orchestrator |  "osd_lvm_uuid": "2e5a0ee6-219f-5b14-b340-2bfd497a8fc5" 2025-09-03 00:38:40.511708 | orchestrator |  } 2025-09-03 00:38:40.511718 | orchestrator |  }, 2025-09-03 00:38:40.511730 | orchestrator |  "lvm_volumes": [ 2025-09-03 00:38:40.511741 | orchestrator |  { 2025-09-03 00:38:40.511752 | orchestrator |  "data": "osd-block-d05881db-8953-52a0-98ec-dd1036bee846", 2025-09-03 00:38:40.511763 | orchestrator |  "data_vg": "ceph-d05881db-8953-52a0-98ec-dd1036bee846" 2025-09-03 00:38:40.511773 | orchestrator |  }, 2025-09-03 
00:38:40.511784 | orchestrator |  { 2025-09-03 00:38:40.511795 | orchestrator |  "data": "osd-block-2e5a0ee6-219f-5b14-b340-2bfd497a8fc5", 2025-09-03 00:38:40.511806 | orchestrator |  "data_vg": "ceph-2e5a0ee6-219f-5b14-b340-2bfd497a8fc5" 2025-09-03 00:38:40.511817 | orchestrator |  } 2025-09-03 00:38:40.511828 | orchestrator |  ] 2025-09-03 00:38:40.511839 | orchestrator |  } 2025-09-03 00:38:40.511850 | orchestrator | } 2025-09-03 00:38:40.511861 | orchestrator | 2025-09-03 00:38:40.511872 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-09-03 00:38:40.511890 | orchestrator | Wednesday 03 September 2025 00:38:37 +0000 (0:00:00.223) 0:00:12.372 *** 2025-09-03 00:38:40.511901 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-03 00:38:40.511912 | orchestrator | 2025-09-03 00:38:40.511923 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-09-03 00:38:40.511933 | orchestrator | 2025-09-03 00:38:40.511944 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-03 00:38:40.511955 | orchestrator | Wednesday 03 September 2025 00:38:40 +0000 (0:00:02.279) 0:00:14.651 *** 2025-09-03 00:38:40.511966 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-09-03 00:38:40.511977 | orchestrator | 2025-09-03 00:38:40.511987 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-03 00:38:40.511998 | orchestrator | Wednesday 03 September 2025 00:38:40 +0000 (0:00:00.260) 0:00:14.912 *** 2025-09-03 00:38:40.512009 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:38:40.512020 | orchestrator | 2025-09-03 00:38:40.512031 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:38:40.512048 | orchestrator | Wednesday 03 September 2025 00:38:40 +0000 (0:00:00.219) 0:00:15.132 *** 2025-09-03 00:38:48.134724 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-09-03 00:38:48.134842 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-09-03 00:38:48.134857 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-09-03 00:38:48.134869 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-09-03 00:38:48.134880 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-09-03 00:38:48.134892 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-09-03 00:38:48.134903 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-09-03 00:38:48.134914 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-09-03 00:38:48.134925 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-09-03 00:38:48.134936 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-09-03 00:38:48.134967 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-09-03 00:38:48.134979 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-09-03 00:38:48.134990 | orchestrator | 
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-09-03 00:38:48.135006 | orchestrator | 2025-09-03 00:38:48.135019 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:38:48.135031 | orchestrator | Wednesday 03 September 2025 00:38:40 +0000 (0:00:00.378) 0:00:15.510 *** 2025-09-03 00:38:48.135043 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:38:48.135058 | orchestrator | 2025-09-03 00:38:48.135069 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:38:48.135080 | orchestrator | Wednesday 03 September 2025 00:38:41 +0000 (0:00:00.199) 0:00:15.710 *** 2025-09-03 00:38:48.135091 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:38:48.135102 | orchestrator | 2025-09-03 00:38:48.135114 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:38:48.135125 | orchestrator | Wednesday 03 September 2025 00:38:41 +0000 (0:00:00.191) 0:00:15.901 *** 2025-09-03 00:38:48.135136 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:38:48.135146 | orchestrator | 2025-09-03 00:38:48.135158 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:38:48.135205 | orchestrator | Wednesday 03 September 2025 00:38:41 +0000 (0:00:00.202) 0:00:16.104 *** 2025-09-03 00:38:48.135217 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:38:48.135253 | orchestrator | 2025-09-03 00:38:48.135267 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:38:48.135280 | orchestrator | Wednesday 03 September 2025 00:38:41 +0000 (0:00:00.202) 0:00:16.306 *** 2025-09-03 00:38:48.135292 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:38:48.135306 | orchestrator | 2025-09-03 00:38:48.135319 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:38:48.135331 | orchestrator | Wednesday 03 September 2025 00:38:42 +0000 (0:00:00.614) 0:00:16.920 *** 2025-09-03 00:38:48.135344 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:38:48.135357 | orchestrator | 2025-09-03 00:38:48.135370 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:38:48.135383 | orchestrator | Wednesday 03 September 2025 00:38:42 +0000 (0:00:00.186) 0:00:17.107 *** 2025-09-03 00:38:48.135395 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:38:48.135409 | orchestrator | 2025-09-03 00:38:48.135421 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:38:48.135433 | orchestrator | Wednesday 03 September 2025 00:38:42 +0000 (0:00:00.193) 0:00:17.300 *** 2025-09-03 00:38:48.135447 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:38:48.135461 | orchestrator | 2025-09-03 00:38:48.135473 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:38:48.135486 | orchestrator | Wednesday 03 September 2025 00:38:42 +0000 (0:00:00.203) 0:00:17.503 *** 2025-09-03 00:38:48.135499 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_c87564b4-441b-42f3-97de-587e6061c3ae) 2025-09-03 00:38:48.135513 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_c87564b4-441b-42f3-97de-587e6061c3ae) 2025-09-03 00:38:48.135526 | orchestrator | 2025-09-03 
00:38:48.135538 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:38:48.135551 | orchestrator | Wednesday 03 September 2025 00:38:43 +0000 (0:00:00.422) 0:00:17.926 *** 2025-09-03 00:38:48.135563 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f4ffaa61-7d7a-4b4d-ae66-bf9c1470deb3) 2025-09-03 00:38:48.135576 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f4ffaa61-7d7a-4b4d-ae66-bf9c1470deb3) 2025-09-03 00:38:48.135589 | orchestrator | 2025-09-03 00:38:48.135600 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:38:48.135611 | orchestrator | Wednesday 03 September 2025 00:38:43 +0000 (0:00:00.417) 0:00:18.343 *** 2025-09-03 00:38:48.135621 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_89937d38-622a-4519-a70d-71f9b6cc380e) 2025-09-03 00:38:48.135632 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_89937d38-622a-4519-a70d-71f9b6cc380e) 2025-09-03 00:38:48.135643 | orchestrator | 2025-09-03 00:38:48.135653 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:38:48.135664 | orchestrator | Wednesday 03 September 2025 00:38:44 +0000 (0:00:00.406) 0:00:18.750 *** 2025-09-03 00:38:48.135693 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_2aa4af3c-ac98-453f-b557-6d0c203c4201) 2025-09-03 00:38:48.135705 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_2aa4af3c-ac98-453f-b557-6d0c203c4201) 2025-09-03 00:38:48.135715 | orchestrator | 2025-09-03 00:38:48.135726 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:38:48.135737 | orchestrator | Wednesday 03 September 2025 00:38:44 +0000 (0:00:00.503) 0:00:19.253 *** 2025-09-03 00:38:48.135748 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-03 00:38:48.135759 | orchestrator | 2025-09-03 00:38:48.135770 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:38:48.135787 | orchestrator | Wednesday 03 September 2025 00:38:44 +0000 (0:00:00.314) 0:00:19.568 *** 2025-09-03 00:38:48.135798 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-09-03 00:38:48.135817 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-09-03 00:38:48.135828 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-09-03 00:38:48.135839 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-09-03 00:38:48.135850 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-09-03 00:38:48.135860 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-09-03 00:38:48.135871 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-09-03 00:38:48.135882 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-09-03 00:38:48.135893 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-09-03 00:38:48.135904 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-09-03 00:38:48.135914 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-09-03 00:38:48.135925 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-09-03 00:38:48.135936 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-09-03 00:38:48.135947 | orchestrator | 2025-09-03 00:38:48.135958 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:38:48.135968 | orchestrator | Wednesday 03 September 2025 00:38:45 +0000 (0:00:00.401) 0:00:19.970 *** 2025-09-03 00:38:48.135980 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:38:48.135990 | orchestrator | 2025-09-03 00:38:48.136001 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:38:48.136012 | orchestrator | Wednesday 03 September 2025 00:38:45 +0000 (0:00:00.226) 0:00:20.197 *** 2025-09-03 00:38:48.136023 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:38:48.136034 | orchestrator | 2025-09-03 00:38:48.136045 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:38:48.136056 | orchestrator | Wednesday 03 September 2025 00:38:46 +0000 (0:00:00.714) 0:00:20.911 *** 2025-09-03 00:38:48.136067 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:38:48.136078 | orchestrator | 2025-09-03 00:38:48.136089 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:38:48.136100 | orchestrator | Wednesday 03 September 2025 00:38:46 +0000 (0:00:00.193) 0:00:21.104 *** 2025-09-03 00:38:48.136111 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:38:48.136122 | orchestrator | 2025-09-03 00:38:48.136133 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:38:48.136144 | orchestrator | Wednesday 03 September 2025 00:38:46 +0000 (0:00:00.194) 0:00:21.299 *** 2025-09-03 00:38:48.136155 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:38:48.136186 | orchestrator | 2025-09-03 00:38:48.136197 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:38:48.136208 | orchestrator | Wednesday 03 September 2025 00:38:46 +0000 (0:00:00.189) 0:00:21.489 *** 2025-09-03 00:38:48.136219 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:38:48.136229 | orchestrator | 2025-09-03 00:38:48.136240 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:38:48.136251 | orchestrator | Wednesday 03 September 2025 00:38:47 +0000 (0:00:00.178) 0:00:21.667 *** 2025-09-03 00:38:48.136262 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:38:48.136272 | orchestrator | 2025-09-03 00:38:48.136283 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:38:48.136294 | orchestrator | Wednesday 03 September 2025 00:38:47 +0000 (0:00:00.189) 0:00:21.857 *** 2025-09-03 00:38:48.136305 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:38:48.136315 | orchestrator | 2025-09-03 00:38:48.136326 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:38:48.136344 | orchestrator | Wednesday 03 September 
2025 00:38:47 +0000 (0:00:00.159) 0:00:22.017 *** 2025-09-03 00:38:48.136355 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-09-03 00:38:48.136366 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-09-03 00:38:48.136377 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-09-03 00:38:48.136388 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-09-03 00:38:48.136399 | orchestrator | 2025-09-03 00:38:48.136410 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:38:48.136421 | orchestrator | Wednesday 03 September 2025 00:38:47 +0000 (0:00:00.566) 0:00:22.583 *** 2025-09-03 00:38:48.136432 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:38:48.136443 | orchestrator | 2025-09-03 00:38:48.136460 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:38:53.122656 | orchestrator | Wednesday 03 September 2025 00:38:48 +0000 (0:00:00.176) 0:00:22.760 *** 2025-09-03 00:38:53.122775 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:38:53.122793 | orchestrator | 2025-09-03 00:38:53.122807 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:38:53.122819 | orchestrator | Wednesday 03 September 2025 00:38:48 +0000 (0:00:00.168) 0:00:22.929 *** 2025-09-03 00:38:53.122830 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:38:53.122842 | orchestrator | 2025-09-03 00:38:53.122853 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:38:53.122864 | orchestrator | Wednesday 03 September 2025 00:38:48 +0000 (0:00:00.156) 0:00:23.085 *** 2025-09-03 00:38:53.122875 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:38:53.122886 | orchestrator | 2025-09-03 00:38:53.122918 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-09-03 00:38:53.122930 | orchestrator | Wednesday 03 September 2025 00:38:48 +0000 (0:00:00.161) 0:00:23.247 *** 2025-09-03 00:38:53.122941 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-09-03 00:38:53.122952 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-09-03 00:38:53.122963 | orchestrator | 2025-09-03 00:38:53.122974 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-09-03 00:38:53.122985 | orchestrator | Wednesday 03 September 2025 00:38:48 +0000 (0:00:00.279) 0:00:23.526 *** 2025-09-03 00:38:53.122996 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:38:53.123007 | orchestrator | 2025-09-03 00:38:53.123018 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-09-03 00:38:53.123029 | orchestrator | Wednesday 03 September 2025 00:38:48 +0000 (0:00:00.097) 0:00:23.624 *** 2025-09-03 00:38:53.123040 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:38:53.123052 | orchestrator | 2025-09-03 00:38:53.123063 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-09-03 00:38:53.123074 | orchestrator | Wednesday 03 September 2025 00:38:49 +0000 (0:00:00.091) 0:00:23.715 *** 2025-09-03 00:38:53.123084 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:38:53.123095 | orchestrator | 2025-09-03 00:38:53.123106 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-09-03 
00:38:53.123117 | orchestrator | Wednesday 03 September 2025 00:38:49 +0000 (0:00:00.093) 0:00:23.808 *** 2025-09-03 00:38:53.123128 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:38:53.123140 | orchestrator | 2025-09-03 00:38:53.123151 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-09-03 00:38:53.123201 | orchestrator | Wednesday 03 September 2025 00:38:49 +0000 (0:00:00.092) 0:00:23.901 *** 2025-09-03 00:38:53.123218 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '400ae980-4c36-5b9b-960d-631158f9c2c9'}}) 2025-09-03 00:38:53.123234 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1107a6cb-8e5a-5215-8b60-1d473d685075'}}) 2025-09-03 00:38:53.123247 | orchestrator | 2025-09-03 00:38:53.123260 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-09-03 00:38:53.123341 | orchestrator | Wednesday 03 September 2025 00:38:49 +0000 (0:00:00.117) 0:00:24.018 *** 2025-09-03 00:38:53.123363 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '400ae980-4c36-5b9b-960d-631158f9c2c9'}})  2025-09-03 00:38:53.123384 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1107a6cb-8e5a-5215-8b60-1d473d685075'}})  2025-09-03 00:38:53.123405 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:38:53.123426 | orchestrator | 2025-09-03 00:38:53.123446 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-09-03 00:38:53.123465 | orchestrator | Wednesday 03 September 2025 00:38:49 +0000 (0:00:00.104) 0:00:24.123 *** 2025-09-03 00:38:53.123479 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '400ae980-4c36-5b9b-960d-631158f9c2c9'}})  2025-09-03 00:38:53.123492 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1107a6cb-8e5a-5215-8b60-1d473d685075'}})  2025-09-03 00:38:53.123505 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:38:53.123518 | orchestrator | 2025-09-03 00:38:53.123531 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-09-03 00:38:53.123542 | orchestrator | Wednesday 03 September 2025 00:38:49 +0000 (0:00:00.126) 0:00:24.249 *** 2025-09-03 00:38:53.123553 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '400ae980-4c36-5b9b-960d-631158f9c2c9'}})  2025-09-03 00:38:53.123564 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1107a6cb-8e5a-5215-8b60-1d473d685075'}})  2025-09-03 00:38:53.123575 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:38:53.123586 | orchestrator | 2025-09-03 00:38:53.123597 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-09-03 00:38:53.123608 | orchestrator | Wednesday 03 September 2025 00:38:49 +0000 (0:00:00.125) 0:00:24.375 *** 2025-09-03 00:38:53.123619 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:38:53.123630 | orchestrator | 2025-09-03 00:38:53.123641 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-09-03 00:38:53.123652 | orchestrator | Wednesday 03 September 2025 00:38:49 +0000 (0:00:00.138) 0:00:24.514 *** 2025-09-03 00:38:53.123663 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:38:53.123674 
| orchestrator | 2025-09-03 00:38:53.123685 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-09-03 00:38:53.123696 | orchestrator | Wednesday 03 September 2025 00:38:50 +0000 (0:00:00.145) 0:00:24.659 *** 2025-09-03 00:38:53.123707 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:38:53.123718 | orchestrator | 2025-09-03 00:38:53.123746 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-09-03 00:38:53.123758 | orchestrator | Wednesday 03 September 2025 00:38:50 +0000 (0:00:00.107) 0:00:24.767 *** 2025-09-03 00:38:53.123769 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:38:53.123780 | orchestrator | 2025-09-03 00:38:53.123791 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-09-03 00:38:53.123802 | orchestrator | Wednesday 03 September 2025 00:38:50 +0000 (0:00:00.271) 0:00:25.038 *** 2025-09-03 00:38:53.123813 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:38:53.123824 | orchestrator | 2025-09-03 00:38:53.123835 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-09-03 00:38:53.123846 | orchestrator | Wednesday 03 September 2025 00:38:50 +0000 (0:00:00.112) 0:00:25.150 *** 2025-09-03 00:38:53.123857 | orchestrator | ok: [testbed-node-4] => { 2025-09-03 00:38:53.123868 | orchestrator |  "ceph_osd_devices": { 2025-09-03 00:38:53.123880 | orchestrator |  "sdb": { 2025-09-03 00:38:53.123892 | orchestrator |  "osd_lvm_uuid": "400ae980-4c36-5b9b-960d-631158f9c2c9" 2025-09-03 00:38:53.123903 | orchestrator |  }, 2025-09-03 00:38:53.123914 | orchestrator |  "sdc": { 2025-09-03 00:38:53.123936 | orchestrator |  "osd_lvm_uuid": "1107a6cb-8e5a-5215-8b60-1d473d685075" 2025-09-03 00:38:53.123947 | orchestrator |  } 2025-09-03 00:38:53.123958 | orchestrator |  } 2025-09-03 00:38:53.123970 | orchestrator | } 2025-09-03 00:38:53.123981 | orchestrator | 2025-09-03 00:38:53.123992 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-09-03 00:38:53.124004 | orchestrator | Wednesday 03 September 2025 00:38:50 +0000 (0:00:00.122) 0:00:25.273 *** 2025-09-03 00:38:53.124015 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:38:53.124026 | orchestrator | 2025-09-03 00:38:53.124044 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-09-03 00:38:53.124055 | orchestrator | Wednesday 03 September 2025 00:38:50 +0000 (0:00:00.117) 0:00:25.391 *** 2025-09-03 00:38:53.124066 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:38:53.124077 | orchestrator | 2025-09-03 00:38:53.124088 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-09-03 00:38:53.124099 | orchestrator | Wednesday 03 September 2025 00:38:50 +0000 (0:00:00.114) 0:00:25.506 *** 2025-09-03 00:38:53.124110 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:38:53.124121 | orchestrator | 2025-09-03 00:38:53.124132 | orchestrator | TASK [Print configuration data] ************************************************ 2025-09-03 00:38:53.124143 | orchestrator | Wednesday 03 September 2025 00:38:50 +0000 (0:00:00.109) 0:00:25.615 *** 2025-09-03 00:38:53.124153 | orchestrator | changed: [testbed-node-4] => { 2025-09-03 00:38:53.124189 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-09-03 00:38:53.124205 | orchestrator |  "ceph_osd_devices": { 2025-09-03 
00:38:53.124216 | orchestrator |  "sdb": { 2025-09-03 00:38:53.124228 | orchestrator |  "osd_lvm_uuid": "400ae980-4c36-5b9b-960d-631158f9c2c9" 2025-09-03 00:38:53.124244 | orchestrator |  }, 2025-09-03 00:38:53.124256 | orchestrator |  "sdc": { 2025-09-03 00:38:53.124267 | orchestrator |  "osd_lvm_uuid": "1107a6cb-8e5a-5215-8b60-1d473d685075" 2025-09-03 00:38:53.124278 | orchestrator |  } 2025-09-03 00:38:53.124289 | orchestrator |  }, 2025-09-03 00:38:53.124300 | orchestrator |  "lvm_volumes": [ 2025-09-03 00:38:53.124311 | orchestrator |  { 2025-09-03 00:38:53.124322 | orchestrator |  "data": "osd-block-400ae980-4c36-5b9b-960d-631158f9c2c9", 2025-09-03 00:38:53.124333 | orchestrator |  "data_vg": "ceph-400ae980-4c36-5b9b-960d-631158f9c2c9" 2025-09-03 00:38:53.124344 | orchestrator |  }, 2025-09-03 00:38:53.124355 | orchestrator |  { 2025-09-03 00:38:53.124366 | orchestrator |  "data": "osd-block-1107a6cb-8e5a-5215-8b60-1d473d685075", 2025-09-03 00:38:53.124377 | orchestrator |  "data_vg": "ceph-1107a6cb-8e5a-5215-8b60-1d473d685075" 2025-09-03 00:38:53.124394 | orchestrator |  } 2025-09-03 00:38:53.124414 | orchestrator |  ] 2025-09-03 00:38:53.124434 | orchestrator |  } 2025-09-03 00:38:53.124454 | orchestrator | } 2025-09-03 00:38:53.124473 | orchestrator | 2025-09-03 00:38:53.124487 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-09-03 00:38:53.124498 | orchestrator | Wednesday 03 September 2025 00:38:51 +0000 (0:00:00.169) 0:00:25.784 *** 2025-09-03 00:38:53.124508 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-09-03 00:38:53.124519 | orchestrator | 2025-09-03 00:38:53.124530 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-09-03 00:38:53.124540 | orchestrator | 2025-09-03 00:38:53.124551 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-03 00:38:53.124561 | orchestrator | Wednesday 03 September 2025 00:38:51 +0000 (0:00:00.840) 0:00:26.625 *** 2025-09-03 00:38:53.124572 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-09-03 00:38:53.124583 | orchestrator | 2025-09-03 00:38:53.124593 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-03 00:38:53.124604 | orchestrator | Wednesday 03 September 2025 00:38:52 +0000 (0:00:00.333) 0:00:26.958 *** 2025-09-03 00:38:53.124623 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:38:53.124634 | orchestrator | 2025-09-03 00:38:53.124645 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:38:53.124656 | orchestrator | Wednesday 03 September 2025 00:38:52 +0000 (0:00:00.470) 0:00:27.428 *** 2025-09-03 00:38:53.124667 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-09-03 00:38:53.124677 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-09-03 00:38:53.124688 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-09-03 00:38:53.124699 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-09-03 00:38:53.124710 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-09-03 00:38:53.124720 | orchestrator | included: /ansible/tasks/_add-device-links.yml for 
testbed-node-5 => (item=loop5) 2025-09-03 00:38:53.124739 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-09-03 00:39:00.521020 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-09-03 00:39:00.521139 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-09-03 00:39:00.521154 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-09-03 00:39:00.521225 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-09-03 00:39:00.521238 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-09-03 00:39:00.521249 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-09-03 00:39:00.521261 | orchestrator | 2025-09-03 00:39:00.521274 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:39:00.521286 | orchestrator | Wednesday 03 September 2025 00:38:53 +0000 (0:00:00.316) 0:00:27.744 *** 2025-09-03 00:39:00.521298 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:39:00.521312 | orchestrator | 2025-09-03 00:39:00.521323 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:39:00.521335 | orchestrator | Wednesday 03 September 2025 00:38:53 +0000 (0:00:00.206) 0:00:27.951 *** 2025-09-03 00:39:00.521346 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:39:00.521357 | orchestrator | 2025-09-03 00:39:00.521368 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:39:00.521380 | orchestrator | Wednesday 03 September 2025 00:38:53 +0000 (0:00:00.161) 0:00:28.113 *** 2025-09-03 00:39:00.521391 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:39:00.521402 | orchestrator | 2025-09-03 00:39:00.521413 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:39:00.521424 | orchestrator | Wednesday 03 September 2025 00:38:53 +0000 (0:00:00.172) 0:00:28.286 *** 2025-09-03 00:39:00.521436 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:39:00.521447 | orchestrator | 2025-09-03 00:39:00.521458 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:39:00.521469 | orchestrator | Wednesday 03 September 2025 00:38:53 +0000 (0:00:00.156) 0:00:28.443 *** 2025-09-03 00:39:00.521481 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:39:00.521492 | orchestrator | 2025-09-03 00:39:00.521503 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:39:00.521514 | orchestrator | Wednesday 03 September 2025 00:38:54 +0000 (0:00:00.201) 0:00:28.645 *** 2025-09-03 00:39:00.521525 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:39:00.521537 | orchestrator | 2025-09-03 00:39:00.521550 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:39:00.521563 | orchestrator | Wednesday 03 September 2025 00:38:54 +0000 (0:00:00.147) 0:00:28.793 *** 2025-09-03 00:39:00.521576 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:39:00.521617 | orchestrator | 2025-09-03 00:39:00.521632 | orchestrator | TASK [Add known links to the list of available block devices] 
****************** 2025-09-03 00:39:00.521646 | orchestrator | Wednesday 03 September 2025 00:38:54 +0000 (0:00:00.174) 0:00:28.967 *** 2025-09-03 00:39:00.521657 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:39:00.521668 | orchestrator | 2025-09-03 00:39:00.521696 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:39:00.521708 | orchestrator | Wednesday 03 September 2025 00:38:54 +0000 (0:00:00.181) 0:00:29.149 *** 2025-09-03 00:39:00.521720 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_e8670175-54bd-41b5-bd3c-dd9ea44e7b4a) 2025-09-03 00:39:00.521733 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_e8670175-54bd-41b5-bd3c-dd9ea44e7b4a) 2025-09-03 00:39:00.521744 | orchestrator | 2025-09-03 00:39:00.521755 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:39:00.521767 | orchestrator | Wednesday 03 September 2025 00:38:54 +0000 (0:00:00.451) 0:00:29.600 *** 2025-09-03 00:39:00.521778 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_409307c9-8e7f-483b-a404-5462fce46233) 2025-09-03 00:39:00.521789 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_409307c9-8e7f-483b-a404-5462fce46233) 2025-09-03 00:39:00.521800 | orchestrator | 2025-09-03 00:39:00.521811 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:39:00.521822 | orchestrator | Wednesday 03 September 2025 00:38:55 +0000 (0:00:00.624) 0:00:30.225 *** 2025-09-03 00:39:00.521833 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_ce19fbd3-6a41-4577-8f91-9183654abf8c) 2025-09-03 00:39:00.521844 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_ce19fbd3-6a41-4577-8f91-9183654abf8c) 2025-09-03 00:39:00.521855 | orchestrator | 2025-09-03 00:39:00.521866 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:39:00.521878 | orchestrator | Wednesday 03 September 2025 00:38:56 +0000 (0:00:00.453) 0:00:30.678 *** 2025-09-03 00:39:00.521888 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d4852aea-51af-4111-8e77-3990a105da37) 2025-09-03 00:39:00.521900 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d4852aea-51af-4111-8e77-3990a105da37) 2025-09-03 00:39:00.521911 | orchestrator | 2025-09-03 00:39:00.521921 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:39:00.521932 | orchestrator | Wednesday 03 September 2025 00:38:56 +0000 (0:00:00.416) 0:00:31.094 *** 2025-09-03 00:39:00.521943 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-03 00:39:00.521954 | orchestrator | 2025-09-03 00:39:00.521965 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:39:00.521976 | orchestrator | Wednesday 03 September 2025 00:38:56 +0000 (0:00:00.312) 0:00:31.407 *** 2025-09-03 00:39:00.522006 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-09-03 00:39:00.522086 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-09-03 00:39:00.522099 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-09-03 00:39:00.522110 | orchestrator 
| included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-09-03 00:39:00.522121 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-09-03 00:39:00.522131 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-09-03 00:39:00.522142 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-09-03 00:39:00.522153 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-09-03 00:39:00.522186 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-09-03 00:39:00.522210 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-09-03 00:39:00.522221 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-09-03 00:39:00.522232 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-09-03 00:39:00.522244 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-09-03 00:39:00.522255 | orchestrator | 2025-09-03 00:39:00.522266 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:39:00.522277 | orchestrator | Wednesday 03 September 2025 00:38:57 +0000 (0:00:00.354) 0:00:31.761 *** 2025-09-03 00:39:00.522288 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:39:00.522300 | orchestrator | 2025-09-03 00:39:00.522311 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:39:00.522322 | orchestrator | Wednesday 03 September 2025 00:38:57 +0000 (0:00:00.232) 0:00:31.994 *** 2025-09-03 00:39:00.522333 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:39:00.522344 | orchestrator | 2025-09-03 00:39:00.522355 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:39:00.522366 | orchestrator | Wednesday 03 September 2025 00:38:57 +0000 (0:00:00.191) 0:00:32.186 *** 2025-09-03 00:39:00.522377 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:39:00.522388 | orchestrator | 2025-09-03 00:39:00.522400 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:39:00.522411 | orchestrator | Wednesday 03 September 2025 00:38:57 +0000 (0:00:00.185) 0:00:32.372 *** 2025-09-03 00:39:00.522422 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:39:00.522433 | orchestrator | 2025-09-03 00:39:00.522444 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:39:00.522455 | orchestrator | Wednesday 03 September 2025 00:38:57 +0000 (0:00:00.184) 0:00:32.556 *** 2025-09-03 00:39:00.522466 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:39:00.522477 | orchestrator | 2025-09-03 00:39:00.522488 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:39:00.522499 | orchestrator | Wednesday 03 September 2025 00:38:58 +0000 (0:00:00.174) 0:00:32.730 *** 2025-09-03 00:39:00.522510 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:39:00.522521 | orchestrator | 2025-09-03 00:39:00.522533 | orchestrator | TASK [Add known partitions to the list of available block devices] 
************* 2025-09-03 00:39:00.522544 | orchestrator | Wednesday 03 September 2025 00:38:58 +0000 (0:00:00.483) 0:00:33.214 *** 2025-09-03 00:39:00.522555 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:39:00.522566 | orchestrator | 2025-09-03 00:39:00.522577 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:39:00.522588 | orchestrator | Wednesday 03 September 2025 00:38:58 +0000 (0:00:00.200) 0:00:33.415 *** 2025-09-03 00:39:00.522599 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:39:00.522610 | orchestrator | 2025-09-03 00:39:00.522621 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:39:00.522632 | orchestrator | Wednesday 03 September 2025 00:38:58 +0000 (0:00:00.184) 0:00:33.599 *** 2025-09-03 00:39:00.522643 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-09-03 00:39:00.522655 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-09-03 00:39:00.522666 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-09-03 00:39:00.522677 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-09-03 00:39:00.522688 | orchestrator | 2025-09-03 00:39:00.522700 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:39:00.522711 | orchestrator | Wednesday 03 September 2025 00:38:59 +0000 (0:00:00.756) 0:00:34.356 *** 2025-09-03 00:39:00.522722 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:39:00.522733 | orchestrator | 2025-09-03 00:39:00.522744 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:39:00.522762 | orchestrator | Wednesday 03 September 2025 00:38:59 +0000 (0:00:00.171) 0:00:34.527 *** 2025-09-03 00:39:00.522774 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:39:00.522785 | orchestrator | 2025-09-03 00:39:00.522796 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:39:00.522807 | orchestrator | Wednesday 03 September 2025 00:39:00 +0000 (0:00:00.203) 0:00:34.731 *** 2025-09-03 00:39:00.522818 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:39:00.522829 | orchestrator | 2025-09-03 00:39:00.522840 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:39:00.522851 | orchestrator | Wednesday 03 September 2025 00:39:00 +0000 (0:00:00.218) 0:00:34.949 *** 2025-09-03 00:39:00.522869 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:39:00.522881 | orchestrator | 2025-09-03 00:39:00.522892 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-09-03 00:39:00.522910 | orchestrator | Wednesday 03 September 2025 00:39:00 +0000 (0:00:00.193) 0:00:35.143 *** 2025-09-03 00:39:04.586015 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2025-09-03 00:39:04.586255 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2025-09-03 00:39:04.586273 | orchestrator | 2025-09-03 00:39:04.586287 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-09-03 00:39:04.586299 | orchestrator | Wednesday 03 September 2025 00:39:00 +0000 (0:00:00.171) 0:00:35.314 *** 2025-09-03 00:39:04.586311 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:39:04.586324 | orchestrator | 2025-09-03 00:39:04.586335 | orchestrator | TASK [Generate DB 
VG names] **************************************************** 2025-09-03 00:39:04.586346 | orchestrator | Wednesday 03 September 2025 00:39:00 +0000 (0:00:00.117) 0:00:35.432 *** 2025-09-03 00:39:04.586357 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:39:04.586368 | orchestrator | 2025-09-03 00:39:04.586380 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-09-03 00:39:04.586391 | orchestrator | Wednesday 03 September 2025 00:39:00 +0000 (0:00:00.104) 0:00:35.536 *** 2025-09-03 00:39:04.586402 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:39:04.586413 | orchestrator | 2025-09-03 00:39:04.586425 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-09-03 00:39:04.586435 | orchestrator | Wednesday 03 September 2025 00:39:01 +0000 (0:00:00.107) 0:00:35.643 *** 2025-09-03 00:39:04.586447 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:39:04.586458 | orchestrator | 2025-09-03 00:39:04.586469 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-09-03 00:39:04.586480 | orchestrator | Wednesday 03 September 2025 00:39:01 +0000 (0:00:00.292) 0:00:35.936 *** 2025-09-03 00:39:04.586492 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e75c81d9-f6c1-538f-9534-cc9e3445127a'}}) 2025-09-03 00:39:04.586504 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '634e15af-8858-53e6-9f62-917e12b08878'}}) 2025-09-03 00:39:04.586515 | orchestrator | 2025-09-03 00:39:04.586526 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-09-03 00:39:04.586537 | orchestrator | Wednesday 03 September 2025 00:39:01 +0000 (0:00:00.170) 0:00:36.106 *** 2025-09-03 00:39:04.586549 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e75c81d9-f6c1-538f-9534-cc9e3445127a'}})  2025-09-03 00:39:04.586562 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '634e15af-8858-53e6-9f62-917e12b08878'}})  2025-09-03 00:39:04.586573 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:39:04.586584 | orchestrator | 2025-09-03 00:39:04.586614 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-09-03 00:39:04.586626 | orchestrator | Wednesday 03 September 2025 00:39:01 +0000 (0:00:00.138) 0:00:36.245 *** 2025-09-03 00:39:04.586637 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e75c81d9-f6c1-538f-9534-cc9e3445127a'}})  2025-09-03 00:39:04.586673 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '634e15af-8858-53e6-9f62-917e12b08878'}})  2025-09-03 00:39:04.586685 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:39:04.586696 | orchestrator | 2025-09-03 00:39:04.586707 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-09-03 00:39:04.586718 | orchestrator | Wednesday 03 September 2025 00:39:01 +0000 (0:00:00.146) 0:00:36.391 *** 2025-09-03 00:39:04.586729 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e75c81d9-f6c1-538f-9534-cc9e3445127a'}})  2025-09-03 00:39:04.586740 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '634e15af-8858-53e6-9f62-917e12b08878'}})  2025-09-03 
00:39:04.586752 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:39:04.586763 | orchestrator | 2025-09-03 00:39:04.586774 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-09-03 00:39:04.586785 | orchestrator | Wednesday 03 September 2025 00:39:01 +0000 (0:00:00.167) 0:00:36.559 *** 2025-09-03 00:39:04.586796 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:39:04.586807 | orchestrator | 2025-09-03 00:39:04.586818 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-09-03 00:39:04.586829 | orchestrator | Wednesday 03 September 2025 00:39:02 +0000 (0:00:00.152) 0:00:36.712 *** 2025-09-03 00:39:04.586840 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:39:04.586851 | orchestrator | 2025-09-03 00:39:04.586862 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-09-03 00:39:04.586873 | orchestrator | Wednesday 03 September 2025 00:39:02 +0000 (0:00:00.138) 0:00:36.850 *** 2025-09-03 00:39:04.586884 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:39:04.586894 | orchestrator | 2025-09-03 00:39:04.586906 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-09-03 00:39:04.586917 | orchestrator | Wednesday 03 September 2025 00:39:02 +0000 (0:00:00.196) 0:00:37.046 *** 2025-09-03 00:39:04.586927 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:39:04.586938 | orchestrator | 2025-09-03 00:39:04.586949 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-09-03 00:39:04.586960 | orchestrator | Wednesday 03 September 2025 00:39:02 +0000 (0:00:00.197) 0:00:37.244 *** 2025-09-03 00:39:04.586971 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:39:04.586982 | orchestrator | 2025-09-03 00:39:04.586993 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-09-03 00:39:04.587004 | orchestrator | Wednesday 03 September 2025 00:39:02 +0000 (0:00:00.152) 0:00:37.396 *** 2025-09-03 00:39:04.587015 | orchestrator | ok: [testbed-node-5] => { 2025-09-03 00:39:04.587026 | orchestrator |  "ceph_osd_devices": { 2025-09-03 00:39:04.587038 | orchestrator |  "sdb": { 2025-09-03 00:39:04.587050 | orchestrator |  "osd_lvm_uuid": "e75c81d9-f6c1-538f-9534-cc9e3445127a" 2025-09-03 00:39:04.587078 | orchestrator |  }, 2025-09-03 00:39:04.587090 | orchestrator |  "sdc": { 2025-09-03 00:39:04.587101 | orchestrator |  "osd_lvm_uuid": "634e15af-8858-53e6-9f62-917e12b08878" 2025-09-03 00:39:04.587113 | orchestrator |  } 2025-09-03 00:39:04.587124 | orchestrator |  } 2025-09-03 00:39:04.587136 | orchestrator | } 2025-09-03 00:39:04.587148 | orchestrator | 2025-09-03 00:39:04.587178 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-09-03 00:39:04.587190 | orchestrator | Wednesday 03 September 2025 00:39:02 +0000 (0:00:00.110) 0:00:37.507 *** 2025-09-03 00:39:04.587201 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:39:04.587212 | orchestrator | 2025-09-03 00:39:04.587223 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-09-03 00:39:04.587234 | orchestrator | Wednesday 03 September 2025 00:39:02 +0000 (0:00:00.111) 0:00:37.619 *** 2025-09-03 00:39:04.587245 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:39:04.587256 | orchestrator | 2025-09-03 00:39:04.587267 | orchestrator | 
TASK [Print shared DB/WAL devices] ********************************************* 2025-09-03 00:39:04.587285 | orchestrator | Wednesday 03 September 2025 00:39:03 +0000 (0:00:00.280) 0:00:37.900 *** 2025-09-03 00:39:04.587297 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:39:04.587307 | orchestrator | 2025-09-03 00:39:04.587318 | orchestrator | TASK [Print configuration data] ************************************************ 2025-09-03 00:39:04.587329 | orchestrator | Wednesday 03 September 2025 00:39:03 +0000 (0:00:00.161) 0:00:38.062 *** 2025-09-03 00:39:04.587340 | orchestrator | changed: [testbed-node-5] => { 2025-09-03 00:39:04.587351 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-09-03 00:39:04.587362 | orchestrator |  "ceph_osd_devices": { 2025-09-03 00:39:04.587373 | orchestrator |  "sdb": { 2025-09-03 00:39:04.587384 | orchestrator |  "osd_lvm_uuid": "e75c81d9-f6c1-538f-9534-cc9e3445127a" 2025-09-03 00:39:04.587395 | orchestrator |  }, 2025-09-03 00:39:04.587407 | orchestrator |  "sdc": { 2025-09-03 00:39:04.587418 | orchestrator |  "osd_lvm_uuid": "634e15af-8858-53e6-9f62-917e12b08878" 2025-09-03 00:39:04.587428 | orchestrator |  } 2025-09-03 00:39:04.587440 | orchestrator |  }, 2025-09-03 00:39:04.587451 | orchestrator |  "lvm_volumes": [ 2025-09-03 00:39:04.587462 | orchestrator |  { 2025-09-03 00:39:04.587473 | orchestrator |  "data": "osd-block-e75c81d9-f6c1-538f-9534-cc9e3445127a", 2025-09-03 00:39:04.587484 | orchestrator |  "data_vg": "ceph-e75c81d9-f6c1-538f-9534-cc9e3445127a" 2025-09-03 00:39:04.587495 | orchestrator |  }, 2025-09-03 00:39:04.587505 | orchestrator |  { 2025-09-03 00:39:04.587516 | orchestrator |  "data": "osd-block-634e15af-8858-53e6-9f62-917e12b08878", 2025-09-03 00:39:04.587528 | orchestrator |  "data_vg": "ceph-634e15af-8858-53e6-9f62-917e12b08878" 2025-09-03 00:39:04.587539 | orchestrator |  } 2025-09-03 00:39:04.587550 | orchestrator |  ] 2025-09-03 00:39:04.587561 | orchestrator |  } 2025-09-03 00:39:04.587577 | orchestrator | } 2025-09-03 00:39:04.587589 | orchestrator | 2025-09-03 00:39:04.587600 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-09-03 00:39:04.587611 | orchestrator | Wednesday 03 September 2025 00:39:03 +0000 (0:00:00.198) 0:00:38.260 *** 2025-09-03 00:39:04.587622 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-09-03 00:39:04.587633 | orchestrator | 2025-09-03 00:39:04.587644 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-03 00:39:04.587663 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-03 00:39:04.587675 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-03 00:39:04.587686 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-03 00:39:04.587698 | orchestrator | 2025-09-03 00:39:04.587709 | orchestrator | 2025-09-03 00:39:04.587719 | orchestrator | 2025-09-03 00:39:04.587730 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-03 00:39:04.587741 | orchestrator | Wednesday 03 September 2025 00:39:04 +0000 (0:00:00.934) 0:00:39.195 *** 2025-09-03 00:39:04.587752 | orchestrator | =============================================================================== 2025-09-03 00:39:04.587763 | orchestrator | Write configuration file 
------------------------------------------------ 4.05s 2025-09-03 00:39:04.587774 | orchestrator | Add known partitions to the list of available block devices ------------- 1.13s 2025-09-03 00:39:04.587785 | orchestrator | Add known links to the list of available block devices ------------------ 1.08s 2025-09-03 00:39:04.587796 | orchestrator | Add known partitions to the list of available block devices ------------- 1.00s 2025-09-03 00:39:04.587807 | orchestrator | Get initial list of available block devices ----------------------------- 0.91s 2025-09-03 00:39:04.587824 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.84s 2025-09-03 00:39:04.587835 | orchestrator | Add known partitions to the list of available block devices ------------- 0.76s 2025-09-03 00:39:04.587846 | orchestrator | Add known partitions to the list of available block devices ------------- 0.71s 2025-09-03 00:39:04.587857 | orchestrator | Add known links to the list of available block devices ------------------ 0.70s 2025-09-03 00:39:04.587868 | orchestrator | Add known links to the list of available block devices ------------------ 0.62s 2025-09-03 00:39:04.587879 | orchestrator | Add known links to the list of available block devices ------------------ 0.61s 2025-09-03 00:39:04.587890 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.61s 2025-09-03 00:39:04.587901 | orchestrator | Set WAL devices config data --------------------------------------------- 0.61s 2025-09-03 00:39:04.587912 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.61s 2025-09-03 00:39:04.587930 | orchestrator | Print configuration data ------------------------------------------------ 0.59s 2025-09-03 00:39:04.853043 | orchestrator | Add known links to the list of available block devices ------------------ 0.59s 2025-09-03 00:39:04.853151 | orchestrator | Add known links to the list of available block devices ------------------ 0.58s 2025-09-03 00:39:04.853221 | orchestrator | Add known partitions to the list of available block devices ------------- 0.57s 2025-09-03 00:39:04.853233 | orchestrator | Print DB devices -------------------------------------------------------- 0.53s 2025-09-03 00:39:04.853245 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.53s 2025-09-03 00:39:27.397052 | orchestrator | 2025-09-03 00:39:27 | INFO  | Task cab29c7a-dad9-4ab7-a303-3c87ab31de3e (sync inventory) is running in background. Output coming soon. 
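
The three "Ceph configure LVM" plays above all end the same way: every device in ceph_osd_devices that carries an osd_lvm_uuid becomes one lvm_volumes entry whose logical volume is named osd-block-<uuid> and whose volume group is named ceph-<uuid>. A minimal Python sketch of that mapping, using the UUIDs printed for testbed-node-3; the function name and the restriction to the block-only case (the db/wal branches are skipped in the log) are assumptions for illustration, not taken from the playbook:

# Sketch: reproduce the ceph_osd_devices -> lvm_volumes mapping printed above.
# Assumes the "block only" layout (no separate DB/WAL devices), matching the
# skipped db/wal branches in the log; build_lvm_volumes is a hypothetical name.
import json

ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "d05881db-8953-52a0-98ec-dd1036bee846"},
    "sdc": {"osd_lvm_uuid": "2e5a0ee6-219f-5b14-b340-2bfd497a8fc5"},
}

def build_lvm_volumes(devices: dict) -> list:
    volumes = []
    for params in devices.values():
        uuid = params["osd_lvm_uuid"]
        volumes.append({
            "data": f"osd-block-{uuid}",   # logical volume name
            "data_vg": f"ceph-{uuid}",     # volume group name
        })
    return volumes

if __name__ == "__main__":
    print(json.dumps(
        {"ceph_osd_devices": ceph_osd_devices,
         "lvm_volumes": build_lvm_volumes(ceph_osd_devices)},
        indent=2))

Run against the data above, this prints the same structure that the "Print configuration data" task shows for each node before the "Write configuration file" handler persists it on testbed-manager.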
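
Earlier in each play, the repeated "Add known links to the list of available block devices" tasks attach stable identifiers such as scsi-0QEMU_QEMU_HARDDISK_<uuid> and ata-QEMU_DVD-ROM_QM00001 to the plain device names discovered first (sda, sdb, sdc, ...). A rough sketch of the same idea, assuming the links live under /dev/disk/by-id and are resolved with realpath; neither detail is taken from the playbook:

# Sketch: collect by-id style links per block device, similar in spirit to the
# "Add known links ..." tasks above. The directory path and the use of
# os.path.realpath are assumptions for illustration.
import os
from collections import defaultdict

def links_by_device(by_id_dir: str = "/dev/disk/by-id") -> dict:
    links = defaultdict(list)
    if not os.path.isdir(by_id_dir):
        return {}
    for name in sorted(os.listdir(by_id_dir)):
        target = os.path.realpath(os.path.join(by_id_dir, name))  # e.g. /dev/sdb
        links[os.path.basename(target)].append(name)
    return dict(links)

if __name__ == "__main__":
    for device, names in links_by_device().items():
        print(device, "->", ", ".join(names))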
2025-09-03 00:39:50.008187 | orchestrator | 2025-09-03 00:39:28 | INFO  | Starting group_vars file reorganization 2025-09-03 00:39:50.008319 | orchestrator | 2025-09-03 00:39:28 | INFO  | Moved 0 file(s) to their respective directories 2025-09-03 00:39:50.008338 | orchestrator | 2025-09-03 00:39:28 | INFO  | Group_vars file reorganization completed 2025-09-03 00:39:50.008350 | orchestrator | 2025-09-03 00:39:30 | INFO  | Starting variable preparation from inventory 2025-09-03 00:39:50.008362 | orchestrator | 2025-09-03 00:39:33 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2025-09-03 00:39:50.008373 | orchestrator | 2025-09-03 00:39:33 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2025-09-03 00:39:50.008384 | orchestrator | 2025-09-03 00:39:33 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2025-09-03 00:39:50.008395 | orchestrator | 2025-09-03 00:39:33 | INFO  | 3 file(s) written, 6 host(s) processed 2025-09-03 00:39:50.008406 | orchestrator | 2025-09-03 00:39:33 | INFO  | Variable preparation completed 2025-09-03 00:39:50.008418 | orchestrator | 2025-09-03 00:39:34 | INFO  | Starting inventory overwrite handling 2025-09-03 00:39:50.008429 | orchestrator | 2025-09-03 00:39:34 | INFO  | Handling group overwrites in 99-overwrite 2025-09-03 00:39:50.008441 | orchestrator | 2025-09-03 00:39:34 | INFO  | Removing group frr:children from 60-generic 2025-09-03 00:39:50.008452 | orchestrator | 2025-09-03 00:39:34 | INFO  | Removing group storage:children from 50-kolla 2025-09-03 00:39:50.008463 | orchestrator | 2025-09-03 00:39:34 | INFO  | Removing group netbird:children from 50-infrastruture 2025-09-03 00:39:50.008474 | orchestrator | 2025-09-03 00:39:34 | INFO  | Removing group ceph-mds from 50-ceph 2025-09-03 00:39:50.008485 | orchestrator | 2025-09-03 00:39:34 | INFO  | Removing group ceph-rgw from 50-ceph 2025-09-03 00:39:50.008496 | orchestrator | 2025-09-03 00:39:34 | INFO  | Handling group overwrites in 20-roles 2025-09-03 00:39:50.008507 | orchestrator | 2025-09-03 00:39:34 | INFO  | Removing group k3s_node from 50-infrastruture 2025-09-03 00:39:50.008545 | orchestrator | 2025-09-03 00:39:34 | INFO  | Removed 6 group(s) in total 2025-09-03 00:39:50.008557 | orchestrator | 2025-09-03 00:39:34 | INFO  | Inventory overwrite handling completed 2025-09-03 00:39:50.008568 | orchestrator | 2025-09-03 00:39:35 | INFO  | Starting merge of inventory files 2025-09-03 00:39:50.008579 | orchestrator | 2025-09-03 00:39:35 | INFO  | Inventory files merged successfully 2025-09-03 00:39:50.008589 | orchestrator | 2025-09-03 00:39:40 | INFO  | Generating ClusterShell configuration from Ansible inventory 2025-09-03 00:39:50.008600 | orchestrator | 2025-09-03 00:39:48 | INFO  | Successfully wrote ClusterShell configuration 2025-09-03 00:39:50.008612 | orchestrator | [master 9085ca2] 2025-09-03-00-39 2025-09-03 00:39:50.008625 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-) 2025-09-03 00:39:51.877442 | orchestrator | 2025-09-03 00:39:51 | INFO  | Task 2ddcf381-50bd-44cf-8106-11247d57705e (ceph-create-lvm-devices) was prepared for execution. 2025-09-03 00:39:51.877547 | orchestrator | 2025-09-03 00:39:51 | INFO  | It takes a moment until task 2ddcf381-50bd-44cf-8106-11247d57705e (ceph-create-lvm-devices) has been started and output is visible here. 
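
The sync-inventory task above reorganizes group_vars, writes derived variables (ceph_rgw_hosts, cephclient_mons, ceph_cluster_fsid) into 050-*.yml files, merges the inventory files and regenerates the ClusterShell configuration before committing the result. A rough sketch of the file-writing step only, assuming PyYAML, a hypothetical write_derived_var helper and example host names; the actual implementation belongs to the OSISM tooling and is not shown in this log:

# Sketch: write a derived inventory variable into a group_vars file, in the
# spirit of the "Writing 050-...yml with ..." messages above. File layout,
# helper name and the example host list are assumptions for illustration.
from pathlib import Path
import yaml  # PyYAML

def write_derived_var(group_vars_dir: str, filename: str, name: str, value) -> Path:
    path = Path(group_vars_dir) / filename
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(yaml.safe_dump({name: value}, default_flow_style=False))
    return path

if __name__ == "__main__":
    write_derived_var(
        "/tmp/group_vars/all",                   # example target directory
        "050-kolla-ceph-rgw-hosts.yml",
        "ceph_rgw_hosts",
        ["testbed-node-0", "testbed-node-1", "testbed-node-2"],  # example hosts
    )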
2025-09-03 00:40:02.500817 | orchestrator | 2025-09-03 00:40:02.500945 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-09-03 00:40:02.500963 | orchestrator | 2025-09-03 00:40:02.500975 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-03 00:40:02.500988 | orchestrator | Wednesday 03 September 2025 00:39:55 +0000 (0:00:00.313) 0:00:00.313 *** 2025-09-03 00:40:02.501000 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-03 00:40:02.501012 | orchestrator | 2025-09-03 00:40:02.501023 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-03 00:40:02.501034 | orchestrator | Wednesday 03 September 2025 00:39:55 +0000 (0:00:00.205) 0:00:00.518 *** 2025-09-03 00:40:02.501046 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:40:02.501060 | orchestrator | 2025-09-03 00:40:02.501071 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:40:02.501082 | orchestrator | Wednesday 03 September 2025 00:39:56 +0000 (0:00:00.197) 0:00:00.716 *** 2025-09-03 00:40:02.501108 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-09-03 00:40:02.501122 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-09-03 00:40:02.501178 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-09-03 00:40:02.501190 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-09-03 00:40:02.501201 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-09-03 00:40:02.501212 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-09-03 00:40:02.501223 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-09-03 00:40:02.501234 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-09-03 00:40:02.501246 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-09-03 00:40:02.501257 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-09-03 00:40:02.501268 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-09-03 00:40:02.501279 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-09-03 00:40:02.501290 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-09-03 00:40:02.501301 | orchestrator | 2025-09-03 00:40:02.501312 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:40:02.501353 | orchestrator | Wednesday 03 September 2025 00:39:56 +0000 (0:00:00.348) 0:00:01.065 *** 2025-09-03 00:40:02.501367 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:40:02.501380 | orchestrator | 2025-09-03 00:40:02.501393 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:40:02.501425 | orchestrator | Wednesday 03 September 2025 00:39:56 +0000 (0:00:00.335) 0:00:01.400 *** 2025-09-03 00:40:02.501439 | orchestrator | skipping: [testbed-node-3] 2025-09-03 
00:40:02.501452 | orchestrator | 2025-09-03 00:40:02.501465 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:40:02.501478 | orchestrator | Wednesday 03 September 2025 00:39:56 +0000 (0:00:00.156) 0:00:01.557 *** 2025-09-03 00:40:02.501497 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:40:02.501511 | orchestrator | 2025-09-03 00:40:02.501524 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:40:02.501537 | orchestrator | Wednesday 03 September 2025 00:39:57 +0000 (0:00:00.136) 0:00:01.693 *** 2025-09-03 00:40:02.501550 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:40:02.501563 | orchestrator | 2025-09-03 00:40:02.501576 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:40:02.501589 | orchestrator | Wednesday 03 September 2025 00:39:57 +0000 (0:00:00.165) 0:00:01.858 *** 2025-09-03 00:40:02.501603 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:40:02.501616 | orchestrator | 2025-09-03 00:40:02.501629 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:40:02.501642 | orchestrator | Wednesday 03 September 2025 00:39:57 +0000 (0:00:00.174) 0:00:02.033 *** 2025-09-03 00:40:02.501655 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:40:02.501668 | orchestrator | 2025-09-03 00:40:02.501681 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:40:02.501694 | orchestrator | Wednesday 03 September 2025 00:39:57 +0000 (0:00:00.182) 0:00:02.216 *** 2025-09-03 00:40:02.501705 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:40:02.501716 | orchestrator | 2025-09-03 00:40:02.501727 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:40:02.501738 | orchestrator | Wednesday 03 September 2025 00:39:57 +0000 (0:00:00.181) 0:00:02.398 *** 2025-09-03 00:40:02.501749 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:40:02.501760 | orchestrator | 2025-09-03 00:40:02.501771 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:40:02.501782 | orchestrator | Wednesday 03 September 2025 00:39:57 +0000 (0:00:00.186) 0:00:02.585 *** 2025-09-03 00:40:02.501793 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_fc54dbc2-85fa-4a7d-8bd9-52ff930caf77) 2025-09-03 00:40:02.501805 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_fc54dbc2-85fa-4a7d-8bd9-52ff930caf77) 2025-09-03 00:40:02.501816 | orchestrator | 2025-09-03 00:40:02.501827 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:40:02.501839 | orchestrator | Wednesday 03 September 2025 00:39:58 +0000 (0:00:00.388) 0:00:02.973 *** 2025-09-03 00:40:02.501867 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_9ba28649-84e7-4d30-a12b-e93c6e95fbcd) 2025-09-03 00:40:02.501880 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_9ba28649-84e7-4d30-a12b-e93c6e95fbcd) 2025-09-03 00:40:02.501891 | orchestrator | 2025-09-03 00:40:02.501902 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:40:02.501913 | orchestrator | Wednesday 03 September 2025 00:39:58 +0000 (0:00:00.395) 0:00:03.369 *** 2025-09-03 
00:40:02.501924 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_7512b390-1fa3-4840-9943-7c6482fdb145) 2025-09-03 00:40:02.501935 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_7512b390-1fa3-4840-9943-7c6482fdb145) 2025-09-03 00:40:02.501946 | orchestrator | 2025-09-03 00:40:02.501957 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:40:02.501976 | orchestrator | Wednesday 03 September 2025 00:39:59 +0000 (0:00:00.606) 0:00:03.976 *** 2025-09-03 00:40:02.501987 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e885087e-46ab-46e4-825b-bdcddcbfdff8) 2025-09-03 00:40:02.501998 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e885087e-46ab-46e4-825b-bdcddcbfdff8) 2025-09-03 00:40:02.502009 | orchestrator | 2025-09-03 00:40:02.502087 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:40:02.502099 | orchestrator | Wednesday 03 September 2025 00:40:00 +0000 (0:00:00.812) 0:00:04.788 *** 2025-09-03 00:40:02.502110 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-03 00:40:02.502121 | orchestrator | 2025-09-03 00:40:02.502157 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:40:02.502169 | orchestrator | Wednesday 03 September 2025 00:40:00 +0000 (0:00:00.333) 0:00:05.122 *** 2025-09-03 00:40:02.502180 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-09-03 00:40:02.502190 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-09-03 00:40:02.502201 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-09-03 00:40:02.502212 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-09-03 00:40:02.502223 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-09-03 00:40:02.502234 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-09-03 00:40:02.502245 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-09-03 00:40:02.502256 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-09-03 00:40:02.502267 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-09-03 00:40:02.502277 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-09-03 00:40:02.502288 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-09-03 00:40:02.502299 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-09-03 00:40:02.502310 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-09-03 00:40:02.502321 | orchestrator | 2025-09-03 00:40:02.502332 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:40:02.502343 | orchestrator | Wednesday 03 September 2025 00:40:00 +0000 (0:00:00.395) 0:00:05.517 *** 2025-09-03 00:40:02.502354 | orchestrator | skipping: [testbed-node-3] 
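The long run of "Add known links/partitions" tasks above comes from including a small per-device task file once for every block device reported by the node (loop0..loop7, sda..sdd, sr0). A minimal sketch of that include pattern, assuming the device list is taken from the standard ansible_devices fact rather than whatever source the OSISM task file actually uses:

# Sketch of the per-device include loop; the included file name matches
# the log, the loop source and loop_var are assumptions.
- name: Add known links to the list of available block devices
  ansible.builtin.include_tasks: _add-device-links.yml
  loop: "{{ ansible_devices.keys() | list }}"
  loop_control:
    loop_var: device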
2025-09-03 00:40:02.502365 | orchestrator | 2025-09-03 00:40:02.502376 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:40:02.502387 | orchestrator | Wednesday 03 September 2025 00:40:01 +0000 (0:00:00.202) 0:00:05.720 *** 2025-09-03 00:40:02.502398 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:40:02.502408 | orchestrator | 2025-09-03 00:40:02.502419 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:40:02.502430 | orchestrator | Wednesday 03 September 2025 00:40:01 +0000 (0:00:00.204) 0:00:05.924 *** 2025-09-03 00:40:02.502441 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:40:02.502452 | orchestrator | 2025-09-03 00:40:02.502463 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:40:02.502473 | orchestrator | Wednesday 03 September 2025 00:40:01 +0000 (0:00:00.193) 0:00:06.117 *** 2025-09-03 00:40:02.502484 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:40:02.502495 | orchestrator | 2025-09-03 00:40:02.502506 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:40:02.502525 | orchestrator | Wednesday 03 September 2025 00:40:01 +0000 (0:00:00.212) 0:00:06.329 *** 2025-09-03 00:40:02.502536 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:40:02.502547 | orchestrator | 2025-09-03 00:40:02.502558 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:40:02.502569 | orchestrator | Wednesday 03 September 2025 00:40:01 +0000 (0:00:00.193) 0:00:06.523 *** 2025-09-03 00:40:02.502579 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:40:02.502590 | orchestrator | 2025-09-03 00:40:02.502601 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:40:02.502612 | orchestrator | Wednesday 03 September 2025 00:40:02 +0000 (0:00:00.208) 0:00:06.731 *** 2025-09-03 00:40:02.502623 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:40:02.502634 | orchestrator | 2025-09-03 00:40:02.502645 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:40:02.502656 | orchestrator | Wednesday 03 September 2025 00:40:02 +0000 (0:00:00.199) 0:00:06.931 *** 2025-09-03 00:40:02.502675 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:40:10.574244 | orchestrator | 2025-09-03 00:40:10.574365 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:40:10.574383 | orchestrator | Wednesday 03 September 2025 00:40:02 +0000 (0:00:00.196) 0:00:07.127 *** 2025-09-03 00:40:10.574396 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-09-03 00:40:10.574411 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-09-03 00:40:10.574423 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-09-03 00:40:10.574434 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-09-03 00:40:10.574445 | orchestrator | 2025-09-03 00:40:10.574456 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:40:10.574467 | orchestrator | Wednesday 03 September 2025 00:40:03 +0000 (0:00:01.112) 0:00:08.239 *** 2025-09-03 00:40:10.574479 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:40:10.574490 | orchestrator | 2025-09-03 00:40:10.574501 | orchestrator | 
TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:40:10.574513 | orchestrator | Wednesday 03 September 2025 00:40:03 +0000 (0:00:00.199) 0:00:08.439 *** 2025-09-03 00:40:10.574524 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:40:10.574534 | orchestrator | 2025-09-03 00:40:10.574546 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:40:10.574557 | orchestrator | Wednesday 03 September 2025 00:40:03 +0000 (0:00:00.190) 0:00:08.630 *** 2025-09-03 00:40:10.574568 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:40:10.574579 | orchestrator | 2025-09-03 00:40:10.574590 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:40:10.574601 | orchestrator | Wednesday 03 September 2025 00:40:04 +0000 (0:00:00.189) 0:00:08.819 *** 2025-09-03 00:40:10.574612 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:40:10.574623 | orchestrator | 2025-09-03 00:40:10.574635 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-09-03 00:40:10.574649 | orchestrator | Wednesday 03 September 2025 00:40:04 +0000 (0:00:00.175) 0:00:08.995 *** 2025-09-03 00:40:10.574661 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:40:10.574674 | orchestrator | 2025-09-03 00:40:10.574687 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-09-03 00:40:10.574700 | orchestrator | Wednesday 03 September 2025 00:40:04 +0000 (0:00:00.119) 0:00:09.115 *** 2025-09-03 00:40:10.574713 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd05881db-8953-52a0-98ec-dd1036bee846'}}) 2025-09-03 00:40:10.574727 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2e5a0ee6-219f-5b14-b340-2bfd497a8fc5'}}) 2025-09-03 00:40:10.574739 | orchestrator | 2025-09-03 00:40:10.574752 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-09-03 00:40:10.574765 | orchestrator | Wednesday 03 September 2025 00:40:04 +0000 (0:00:00.180) 0:00:09.295 *** 2025-09-03 00:40:10.574779 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-d05881db-8953-52a0-98ec-dd1036bee846', 'data_vg': 'ceph-d05881db-8953-52a0-98ec-dd1036bee846'}) 2025-09-03 00:40:10.574817 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-2e5a0ee6-219f-5b14-b340-2bfd497a8fc5', 'data_vg': 'ceph-2e5a0ee6-219f-5b14-b340-2bfd497a8fc5'}) 2025-09-03 00:40:10.574831 | orchestrator | 2025-09-03 00:40:10.574862 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-09-03 00:40:10.574883 | orchestrator | Wednesday 03 September 2025 00:40:06 +0000 (0:00:01.995) 0:00:11.290 *** 2025-09-03 00:40:10.574897 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d05881db-8953-52a0-98ec-dd1036bee846', 'data_vg': 'ceph-d05881db-8953-52a0-98ec-dd1036bee846'})  2025-09-03 00:40:10.574912 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2e5a0ee6-219f-5b14-b340-2bfd497a8fc5', 'data_vg': 'ceph-2e5a0ee6-219f-5b14-b340-2bfd497a8fc5'})  2025-09-03 00:40:10.574925 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:40:10.574938 | orchestrator | 2025-09-03 00:40:10.574951 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-09-03 
00:40:10.574964 | orchestrator | Wednesday 03 September 2025 00:40:06 +0000 (0:00:00.151) 0:00:11.441 *** 2025-09-03 00:40:10.574977 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-d05881db-8953-52a0-98ec-dd1036bee846', 'data_vg': 'ceph-d05881db-8953-52a0-98ec-dd1036bee846'}) 2025-09-03 00:40:10.574990 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-2e5a0ee6-219f-5b14-b340-2bfd497a8fc5', 'data_vg': 'ceph-2e5a0ee6-219f-5b14-b340-2bfd497a8fc5'}) 2025-09-03 00:40:10.575003 | orchestrator | 2025-09-03 00:40:10.575015 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-09-03 00:40:10.575026 | orchestrator | Wednesday 03 September 2025 00:40:08 +0000 (0:00:01.531) 0:00:12.973 *** 2025-09-03 00:40:10.575036 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d05881db-8953-52a0-98ec-dd1036bee846', 'data_vg': 'ceph-d05881db-8953-52a0-98ec-dd1036bee846'})  2025-09-03 00:40:10.575048 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2e5a0ee6-219f-5b14-b340-2bfd497a8fc5', 'data_vg': 'ceph-2e5a0ee6-219f-5b14-b340-2bfd497a8fc5'})  2025-09-03 00:40:10.575059 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:40:10.575070 | orchestrator | 2025-09-03 00:40:10.575081 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-09-03 00:40:10.575092 | orchestrator | Wednesday 03 September 2025 00:40:08 +0000 (0:00:00.170) 0:00:13.143 *** 2025-09-03 00:40:10.575104 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:40:10.575115 | orchestrator | 2025-09-03 00:40:10.575147 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-09-03 00:40:10.575177 | orchestrator | Wednesday 03 September 2025 00:40:08 +0000 (0:00:00.141) 0:00:13.285 *** 2025-09-03 00:40:10.575189 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d05881db-8953-52a0-98ec-dd1036bee846', 'data_vg': 'ceph-d05881db-8953-52a0-98ec-dd1036bee846'})  2025-09-03 00:40:10.575200 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2e5a0ee6-219f-5b14-b340-2bfd497a8fc5', 'data_vg': 'ceph-2e5a0ee6-219f-5b14-b340-2bfd497a8fc5'})  2025-09-03 00:40:10.575211 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:40:10.575222 | orchestrator | 2025-09-03 00:40:10.575233 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-09-03 00:40:10.575244 | orchestrator | Wednesday 03 September 2025 00:40:09 +0000 (0:00:00.454) 0:00:13.739 *** 2025-09-03 00:40:10.575255 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:40:10.575266 | orchestrator | 2025-09-03 00:40:10.575277 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-09-03 00:40:10.575288 | orchestrator | Wednesday 03 September 2025 00:40:09 +0000 (0:00:00.146) 0:00:13.886 *** 2025-09-03 00:40:10.575299 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d05881db-8953-52a0-98ec-dd1036bee846', 'data_vg': 'ceph-d05881db-8953-52a0-98ec-dd1036bee846'})  2025-09-03 00:40:10.575319 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2e5a0ee6-219f-5b14-b340-2bfd497a8fc5', 'data_vg': 'ceph-2e5a0ee6-219f-5b14-b340-2bfd497a8fc5'})  2025-09-03 00:40:10.575330 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:40:10.575340 | orchestrator | 2025-09-03 00:40:10.575351 | orchestrator | 
TASK [Create DB+WAL VGs] ******************************************************* 2025-09-03 00:40:10.575362 | orchestrator | Wednesday 03 September 2025 00:40:09 +0000 (0:00:00.159) 0:00:14.045 *** 2025-09-03 00:40:10.575373 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:40:10.575384 | orchestrator | 2025-09-03 00:40:10.575395 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-09-03 00:40:10.575406 | orchestrator | Wednesday 03 September 2025 00:40:09 +0000 (0:00:00.147) 0:00:14.193 *** 2025-09-03 00:40:10.575417 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d05881db-8953-52a0-98ec-dd1036bee846', 'data_vg': 'ceph-d05881db-8953-52a0-98ec-dd1036bee846'})  2025-09-03 00:40:10.575428 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2e5a0ee6-219f-5b14-b340-2bfd497a8fc5', 'data_vg': 'ceph-2e5a0ee6-219f-5b14-b340-2bfd497a8fc5'})  2025-09-03 00:40:10.575439 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:40:10.575450 | orchestrator | 2025-09-03 00:40:10.575461 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-09-03 00:40:10.575472 | orchestrator | Wednesday 03 September 2025 00:40:09 +0000 (0:00:00.148) 0:00:14.341 *** 2025-09-03 00:40:10.575483 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:40:10.575494 | orchestrator | 2025-09-03 00:40:10.575505 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-09-03 00:40:10.575516 | orchestrator | Wednesday 03 September 2025 00:40:09 +0000 (0:00:00.133) 0:00:14.474 *** 2025-09-03 00:40:10.575532 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d05881db-8953-52a0-98ec-dd1036bee846', 'data_vg': 'ceph-d05881db-8953-52a0-98ec-dd1036bee846'})  2025-09-03 00:40:10.575543 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2e5a0ee6-219f-5b14-b340-2bfd497a8fc5', 'data_vg': 'ceph-2e5a0ee6-219f-5b14-b340-2bfd497a8fc5'})  2025-09-03 00:40:10.575554 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:40:10.575565 | orchestrator | 2025-09-03 00:40:10.575576 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-09-03 00:40:10.575587 | orchestrator | Wednesday 03 September 2025 00:40:09 +0000 (0:00:00.161) 0:00:14.636 *** 2025-09-03 00:40:10.575598 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d05881db-8953-52a0-98ec-dd1036bee846', 'data_vg': 'ceph-d05881db-8953-52a0-98ec-dd1036bee846'})  2025-09-03 00:40:10.575609 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2e5a0ee6-219f-5b14-b340-2bfd497a8fc5', 'data_vg': 'ceph-2e5a0ee6-219f-5b14-b340-2bfd497a8fc5'})  2025-09-03 00:40:10.575620 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:40:10.575631 | orchestrator | 2025-09-03 00:40:10.575642 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-09-03 00:40:10.575653 | orchestrator | Wednesday 03 September 2025 00:40:10 +0000 (0:00:00.147) 0:00:14.783 *** 2025-09-03 00:40:10.575663 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d05881db-8953-52a0-98ec-dd1036bee846', 'data_vg': 'ceph-d05881db-8953-52a0-98ec-dd1036bee846'})  2025-09-03 00:40:10.575675 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2e5a0ee6-219f-5b14-b340-2bfd497a8fc5', 'data_vg': 'ceph-2e5a0ee6-219f-5b14-b340-2bfd497a8fc5'})  
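The "Create block VGs" and "Create block LVs" tasks above turn each entry of ceph_osd_devices (a device name plus a stable osd_lvm_uuid, as shown in the loop items) into a ceph-<uuid> volume group on that device and a single osd-block-<uuid> logical volume inside it. A minimal sketch of that pattern with the community.general LVM modules; this is not the actual OSISM task file, and the 100%VG sizing is an assumption:

# Build one VG per OSD device and one block LV per VG; the dict2items
# shape mirrors the key/value loop items printed in the log.
- name: Create block VGs
  community.general.lvg:
    vg: "ceph-{{ item.value.osd_lvm_uuid }}"
    pvs: "/dev/{{ item.key }}"
  loop: "{{ ceph_osd_devices | dict2items }}"

- name: Create block LVs
  community.general.lvol:
    vg: "ceph-{{ item.value.osd_lvm_uuid }}"
    lv: "osd-block-{{ item.value.osd_lvm_uuid }}"
    size: 100%VG
  loop: "{{ ceph_osd_devices | dict2items }}"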
2025-09-03 00:40:10.575686 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:40:10.575696 | orchestrator | 2025-09-03 00:40:10.575708 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-09-03 00:40:10.575718 | orchestrator | Wednesday 03 September 2025 00:40:10 +0000 (0:00:00.156) 0:00:14.940 *** 2025-09-03 00:40:10.575729 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:40:10.575747 | orchestrator | 2025-09-03 00:40:10.575758 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-09-03 00:40:10.575769 | orchestrator | Wednesday 03 September 2025 00:40:10 +0000 (0:00:00.129) 0:00:15.070 *** 2025-09-03 00:40:10.575780 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:40:10.575791 | orchestrator | 2025-09-03 00:40:10.575807 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-09-03 00:40:16.853098 | orchestrator | Wednesday 03 September 2025 00:40:10 +0000 (0:00:00.136) 0:00:15.207 *** 2025-09-03 00:40:16.853300 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:40:16.853334 | orchestrator | 2025-09-03 00:40:16.853357 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-09-03 00:40:16.853378 | orchestrator | Wednesday 03 September 2025 00:40:10 +0000 (0:00:00.133) 0:00:15.340 *** 2025-09-03 00:40:16.853397 | orchestrator | ok: [testbed-node-3] => { 2025-09-03 00:40:16.853418 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-09-03 00:40:16.853438 | orchestrator | } 2025-09-03 00:40:16.853460 | orchestrator | 2025-09-03 00:40:16.853473 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-09-03 00:40:16.853485 | orchestrator | Wednesday 03 September 2025 00:40:11 +0000 (0:00:00.421) 0:00:15.761 *** 2025-09-03 00:40:16.853496 | orchestrator | ok: [testbed-node-3] => { 2025-09-03 00:40:16.853508 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-09-03 00:40:16.853519 | orchestrator | } 2025-09-03 00:40:16.853530 | orchestrator | 2025-09-03 00:40:16.853542 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-09-03 00:40:16.853553 | orchestrator | Wednesday 03 September 2025 00:40:11 +0000 (0:00:00.154) 0:00:15.916 *** 2025-09-03 00:40:16.853564 | orchestrator | ok: [testbed-node-3] => { 2025-09-03 00:40:16.853575 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-09-03 00:40:16.853587 | orchestrator | } 2025-09-03 00:40:16.853598 | orchestrator | 2025-09-03 00:40:16.853609 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-09-03 00:40:16.853620 | orchestrator | Wednesday 03 September 2025 00:40:11 +0000 (0:00:00.185) 0:00:16.102 *** 2025-09-03 00:40:16.853633 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:40:16.853647 | orchestrator | 2025-09-03 00:40:16.853659 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-09-03 00:40:16.853672 | orchestrator | Wednesday 03 September 2025 00:40:12 +0000 (0:00:00.655) 0:00:16.757 *** 2025-09-03 00:40:16.853685 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:40:16.853698 | orchestrator | 2025-09-03 00:40:16.853712 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-09-03 00:40:16.853725 | orchestrator | Wednesday 03 September 2025 00:40:12 +0000 
(0:00:00.540) 0:00:17.298 *** 2025-09-03 00:40:16.853737 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:40:16.853750 | orchestrator | 2025-09-03 00:40:16.853764 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-09-03 00:40:16.853776 | orchestrator | Wednesday 03 September 2025 00:40:13 +0000 (0:00:00.481) 0:00:17.779 *** 2025-09-03 00:40:16.853790 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:40:16.853802 | orchestrator | 2025-09-03 00:40:16.853815 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-09-03 00:40:16.853827 | orchestrator | Wednesday 03 September 2025 00:40:13 +0000 (0:00:00.151) 0:00:17.930 *** 2025-09-03 00:40:16.853840 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:40:16.853854 | orchestrator | 2025-09-03 00:40:16.853867 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-09-03 00:40:16.853880 | orchestrator | Wednesday 03 September 2025 00:40:13 +0000 (0:00:00.105) 0:00:18.036 *** 2025-09-03 00:40:16.853893 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:40:16.853905 | orchestrator | 2025-09-03 00:40:16.853918 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-09-03 00:40:16.853931 | orchestrator | Wednesday 03 September 2025 00:40:13 +0000 (0:00:00.117) 0:00:18.153 *** 2025-09-03 00:40:16.853943 | orchestrator | ok: [testbed-node-3] => { 2025-09-03 00:40:16.853982 | orchestrator |  "vgs_report": { 2025-09-03 00:40:16.853997 | orchestrator |  "vg": [] 2025-09-03 00:40:16.854010 | orchestrator |  } 2025-09-03 00:40:16.854082 | orchestrator | } 2025-09-03 00:40:16.854094 | orchestrator | 2025-09-03 00:40:16.854106 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-09-03 00:40:16.854117 | orchestrator | Wednesday 03 September 2025 00:40:13 +0000 (0:00:00.141) 0:00:18.294 *** 2025-09-03 00:40:16.854148 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:40:16.854160 | orchestrator | 2025-09-03 00:40:16.854171 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-09-03 00:40:16.854182 | orchestrator | Wednesday 03 September 2025 00:40:13 +0000 (0:00:00.128) 0:00:18.422 *** 2025-09-03 00:40:16.854193 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:40:16.854204 | orchestrator | 2025-09-03 00:40:16.854215 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-09-03 00:40:16.854226 | orchestrator | Wednesday 03 September 2025 00:40:13 +0000 (0:00:00.130) 0:00:18.552 *** 2025-09-03 00:40:16.854236 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:40:16.854247 | orchestrator | 2025-09-03 00:40:16.854258 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-09-03 00:40:16.854269 | orchestrator | Wednesday 03 September 2025 00:40:14 +0000 (0:00:00.316) 0:00:18.869 *** 2025-09-03 00:40:16.854280 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:40:16.854290 | orchestrator | 2025-09-03 00:40:16.854301 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-09-03 00:40:16.854312 | orchestrator | Wednesday 03 September 2025 00:40:14 +0000 (0:00:00.147) 0:00:19.017 *** 2025-09-03 00:40:16.854323 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:40:16.854334 | orchestrator | 
2025-09-03 00:40:16.854361 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-09-03 00:40:16.854373 | orchestrator | Wednesday 03 September 2025 00:40:14 +0000 (0:00:00.131) 0:00:19.149 *** 2025-09-03 00:40:16.854384 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:40:16.854395 | orchestrator | 2025-09-03 00:40:16.854406 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-09-03 00:40:16.854417 | orchestrator | Wednesday 03 September 2025 00:40:14 +0000 (0:00:00.139) 0:00:19.288 *** 2025-09-03 00:40:16.854427 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:40:16.854438 | orchestrator | 2025-09-03 00:40:16.854449 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-09-03 00:40:16.854460 | orchestrator | Wednesday 03 September 2025 00:40:14 +0000 (0:00:00.157) 0:00:19.446 *** 2025-09-03 00:40:16.854471 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:40:16.854482 | orchestrator | 2025-09-03 00:40:16.854493 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-09-03 00:40:16.854525 | orchestrator | Wednesday 03 September 2025 00:40:14 +0000 (0:00:00.122) 0:00:19.568 *** 2025-09-03 00:40:16.854536 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:40:16.854547 | orchestrator | 2025-09-03 00:40:16.854559 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-09-03 00:40:16.854570 | orchestrator | Wednesday 03 September 2025 00:40:15 +0000 (0:00:00.131) 0:00:19.700 *** 2025-09-03 00:40:16.854582 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:40:16.854593 | orchestrator | 2025-09-03 00:40:16.854604 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-09-03 00:40:16.854615 | orchestrator | Wednesday 03 September 2025 00:40:15 +0000 (0:00:00.129) 0:00:19.830 *** 2025-09-03 00:40:16.854626 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:40:16.854637 | orchestrator | 2025-09-03 00:40:16.854648 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-09-03 00:40:16.854659 | orchestrator | Wednesday 03 September 2025 00:40:15 +0000 (0:00:00.142) 0:00:19.972 *** 2025-09-03 00:40:16.854670 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:40:16.854681 | orchestrator | 2025-09-03 00:40:16.854704 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-09-03 00:40:16.854715 | orchestrator | Wednesday 03 September 2025 00:40:15 +0000 (0:00:00.126) 0:00:20.099 *** 2025-09-03 00:40:16.854726 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:40:16.854737 | orchestrator | 2025-09-03 00:40:16.854748 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-09-03 00:40:16.854760 | orchestrator | Wednesday 03 September 2025 00:40:15 +0000 (0:00:00.125) 0:00:20.225 *** 2025-09-03 00:40:16.854771 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:40:16.854782 | orchestrator | 2025-09-03 00:40:16.854793 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-09-03 00:40:16.854804 | orchestrator | Wednesday 03 September 2025 00:40:15 +0000 (0:00:00.156) 0:00:20.381 *** 2025-09-03 00:40:16.854817 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-d05881db-8953-52a0-98ec-dd1036bee846', 'data_vg': 'ceph-d05881db-8953-52a0-98ec-dd1036bee846'})  2025-09-03 00:40:16.854830 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2e5a0ee6-219f-5b14-b340-2bfd497a8fc5', 'data_vg': 'ceph-2e5a0ee6-219f-5b14-b340-2bfd497a8fc5'})  2025-09-03 00:40:16.854841 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:40:16.854852 | orchestrator | 2025-09-03 00:40:16.854864 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-09-03 00:40:16.854875 | orchestrator | Wednesday 03 September 2025 00:40:16 +0000 (0:00:00.345) 0:00:20.727 *** 2025-09-03 00:40:16.854886 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d05881db-8953-52a0-98ec-dd1036bee846', 'data_vg': 'ceph-d05881db-8953-52a0-98ec-dd1036bee846'})  2025-09-03 00:40:16.854897 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2e5a0ee6-219f-5b14-b340-2bfd497a8fc5', 'data_vg': 'ceph-2e5a0ee6-219f-5b14-b340-2bfd497a8fc5'})  2025-09-03 00:40:16.854909 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:40:16.854920 | orchestrator | 2025-09-03 00:40:16.854930 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-09-03 00:40:16.854941 | orchestrator | Wednesday 03 September 2025 00:40:16 +0000 (0:00:00.160) 0:00:20.887 *** 2025-09-03 00:40:16.854958 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d05881db-8953-52a0-98ec-dd1036bee846', 'data_vg': 'ceph-d05881db-8953-52a0-98ec-dd1036bee846'})  2025-09-03 00:40:16.854969 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2e5a0ee6-219f-5b14-b340-2bfd497a8fc5', 'data_vg': 'ceph-2e5a0ee6-219f-5b14-b340-2bfd497a8fc5'})  2025-09-03 00:40:16.854980 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:40:16.854991 | orchestrator | 2025-09-03 00:40:16.855002 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-09-03 00:40:16.855013 | orchestrator | Wednesday 03 September 2025 00:40:16 +0000 (0:00:00.138) 0:00:21.026 *** 2025-09-03 00:40:16.855024 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d05881db-8953-52a0-98ec-dd1036bee846', 'data_vg': 'ceph-d05881db-8953-52a0-98ec-dd1036bee846'})  2025-09-03 00:40:16.855035 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2e5a0ee6-219f-5b14-b340-2bfd497a8fc5', 'data_vg': 'ceph-2e5a0ee6-219f-5b14-b340-2bfd497a8fc5'})  2025-09-03 00:40:16.855047 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:40:16.855058 | orchestrator | 2025-09-03 00:40:16.855069 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-09-03 00:40:16.855079 | orchestrator | Wednesday 03 September 2025 00:40:16 +0000 (0:00:00.155) 0:00:21.182 *** 2025-09-03 00:40:16.855090 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d05881db-8953-52a0-98ec-dd1036bee846', 'data_vg': 'ceph-d05881db-8953-52a0-98ec-dd1036bee846'})  2025-09-03 00:40:16.855102 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2e5a0ee6-219f-5b14-b340-2bfd497a8fc5', 'data_vg': 'ceph-2e5a0ee6-219f-5b14-b340-2bfd497a8fc5'})  2025-09-03 00:40:16.855113 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:40:16.855147 | orchestrator | 2025-09-03 00:40:16.855159 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 
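All of the DB/WAL tasks above report "skipping" because this testbed defines only plain OSD devices; no ceph_db_devices, ceph_wal_devices or ceph_db_wal_devices are set, so there is nothing to size-check or create. Purely as an illustration of how a shared DB+WAL device would be declared: the num_osds key is referenced by the count checks above, while the rest of the schema is an assumption.

# Hypothetical declaration of a shared DB+WAL device; not present in
# this run, shown only to indicate which branch those tasks serve.
ceph_db_wal_devices:
  sdd:
    num_osds: 2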
2025-09-03 00:40:16.855170 | orchestrator | Wednesday 03 September 2025 00:40:16 +0000 (0:00:00.160) 0:00:21.343 *** 2025-09-03 00:40:16.855181 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d05881db-8953-52a0-98ec-dd1036bee846', 'data_vg': 'ceph-d05881db-8953-52a0-98ec-dd1036bee846'})  2025-09-03 00:40:16.855199 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2e5a0ee6-219f-5b14-b340-2bfd497a8fc5', 'data_vg': 'ceph-2e5a0ee6-219f-5b14-b340-2bfd497a8fc5'})  2025-09-03 00:40:22.421182 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:40:22.421301 | orchestrator | 2025-09-03 00:40:22.421319 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-09-03 00:40:22.421332 | orchestrator | Wednesday 03 September 2025 00:40:16 +0000 (0:00:00.144) 0:00:21.487 *** 2025-09-03 00:40:22.421345 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d05881db-8953-52a0-98ec-dd1036bee846', 'data_vg': 'ceph-d05881db-8953-52a0-98ec-dd1036bee846'})  2025-09-03 00:40:22.421358 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2e5a0ee6-219f-5b14-b340-2bfd497a8fc5', 'data_vg': 'ceph-2e5a0ee6-219f-5b14-b340-2bfd497a8fc5'})  2025-09-03 00:40:22.421369 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:40:22.421380 | orchestrator | 2025-09-03 00:40:22.421392 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-09-03 00:40:22.421403 | orchestrator | Wednesday 03 September 2025 00:40:16 +0000 (0:00:00.146) 0:00:21.634 *** 2025-09-03 00:40:22.421414 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d05881db-8953-52a0-98ec-dd1036bee846', 'data_vg': 'ceph-d05881db-8953-52a0-98ec-dd1036bee846'})  2025-09-03 00:40:22.421425 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2e5a0ee6-219f-5b14-b340-2bfd497a8fc5', 'data_vg': 'ceph-2e5a0ee6-219f-5b14-b340-2bfd497a8fc5'})  2025-09-03 00:40:22.421437 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:40:22.421448 | orchestrator | 2025-09-03 00:40:22.421459 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-09-03 00:40:22.421471 | orchestrator | Wednesday 03 September 2025 00:40:17 +0000 (0:00:00.135) 0:00:21.769 *** 2025-09-03 00:40:22.421482 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:40:22.421494 | orchestrator | 2025-09-03 00:40:22.421505 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-09-03 00:40:22.421516 | orchestrator | Wednesday 03 September 2025 00:40:17 +0000 (0:00:00.571) 0:00:22.340 *** 2025-09-03 00:40:22.421527 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:40:22.421538 | orchestrator | 2025-09-03 00:40:22.421548 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-09-03 00:40:22.421559 | orchestrator | Wednesday 03 September 2025 00:40:18 +0000 (0:00:00.532) 0:00:22.873 *** 2025-09-03 00:40:22.421570 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:40:22.421581 | orchestrator | 2025-09-03 00:40:22.421592 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-09-03 00:40:22.421603 | orchestrator | Wednesday 03 September 2025 00:40:18 +0000 (0:00:00.143) 0:00:23.017 *** 2025-09-03 00:40:22.421614 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 
'osd-block-2e5a0ee6-219f-5b14-b340-2bfd497a8fc5', 'vg_name': 'ceph-2e5a0ee6-219f-5b14-b340-2bfd497a8fc5'}) 2025-09-03 00:40:22.421626 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-d05881db-8953-52a0-98ec-dd1036bee846', 'vg_name': 'ceph-d05881db-8953-52a0-98ec-dd1036bee846'}) 2025-09-03 00:40:22.421637 | orchestrator | 2025-09-03 00:40:22.421649 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-09-03 00:40:22.421663 | orchestrator | Wednesday 03 September 2025 00:40:18 +0000 (0:00:00.180) 0:00:23.197 *** 2025-09-03 00:40:22.421676 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d05881db-8953-52a0-98ec-dd1036bee846', 'data_vg': 'ceph-d05881db-8953-52a0-98ec-dd1036bee846'})  2025-09-03 00:40:22.421717 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2e5a0ee6-219f-5b14-b340-2bfd497a8fc5', 'data_vg': 'ceph-2e5a0ee6-219f-5b14-b340-2bfd497a8fc5'})  2025-09-03 00:40:22.421730 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:40:22.421742 | orchestrator | 2025-09-03 00:40:22.421755 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-09-03 00:40:22.421768 | orchestrator | Wednesday 03 September 2025 00:40:18 +0000 (0:00:00.361) 0:00:23.559 *** 2025-09-03 00:40:22.421781 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d05881db-8953-52a0-98ec-dd1036bee846', 'data_vg': 'ceph-d05881db-8953-52a0-98ec-dd1036bee846'})  2025-09-03 00:40:22.421793 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2e5a0ee6-219f-5b14-b340-2bfd497a8fc5', 'data_vg': 'ceph-2e5a0ee6-219f-5b14-b340-2bfd497a8fc5'})  2025-09-03 00:40:22.421807 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:40:22.421819 | orchestrator | 2025-09-03 00:40:22.421832 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-09-03 00:40:22.421845 | orchestrator | Wednesday 03 September 2025 00:40:19 +0000 (0:00:00.142) 0:00:23.701 *** 2025-09-03 00:40:22.421858 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d05881db-8953-52a0-98ec-dd1036bee846', 'data_vg': 'ceph-d05881db-8953-52a0-98ec-dd1036bee846'})  2025-09-03 00:40:22.421871 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2e5a0ee6-219f-5b14-b340-2bfd497a8fc5', 'data_vg': 'ceph-2e5a0ee6-219f-5b14-b340-2bfd497a8fc5'})  2025-09-03 00:40:22.421885 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:40:22.421896 | orchestrator | 2025-09-03 00:40:22.421907 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-09-03 00:40:22.421918 | orchestrator | Wednesday 03 September 2025 00:40:19 +0000 (0:00:00.189) 0:00:23.891 *** 2025-09-03 00:40:22.421928 | orchestrator | ok: [testbed-node-3] => { 2025-09-03 00:40:22.421940 | orchestrator |  "lvm_report": { 2025-09-03 00:40:22.421952 | orchestrator |  "lv": [ 2025-09-03 00:40:22.421963 | orchestrator |  { 2025-09-03 00:40:22.421991 | orchestrator |  "lv_name": "osd-block-2e5a0ee6-219f-5b14-b340-2bfd497a8fc5", 2025-09-03 00:40:22.422003 | orchestrator |  "vg_name": "ceph-2e5a0ee6-219f-5b14-b340-2bfd497a8fc5" 2025-09-03 00:40:22.422076 | orchestrator |  }, 2025-09-03 00:40:22.422091 | orchestrator |  { 2025-09-03 00:40:22.422103 | orchestrator |  "lv_name": "osd-block-d05881db-8953-52a0-98ec-dd1036bee846", 2025-09-03 00:40:22.422114 | orchestrator |  "vg_name": 
"ceph-d05881db-8953-52a0-98ec-dd1036bee846" 2025-09-03 00:40:22.422165 | orchestrator |  } 2025-09-03 00:40:22.422177 | orchestrator |  ], 2025-09-03 00:40:22.422188 | orchestrator |  "pv": [ 2025-09-03 00:40:22.422199 | orchestrator |  { 2025-09-03 00:40:22.422210 | orchestrator |  "pv_name": "/dev/sdb", 2025-09-03 00:40:22.422221 | orchestrator |  "vg_name": "ceph-d05881db-8953-52a0-98ec-dd1036bee846" 2025-09-03 00:40:22.422231 | orchestrator |  }, 2025-09-03 00:40:22.422242 | orchestrator |  { 2025-09-03 00:40:22.422253 | orchestrator |  "pv_name": "/dev/sdc", 2025-09-03 00:40:22.422263 | orchestrator |  "vg_name": "ceph-2e5a0ee6-219f-5b14-b340-2bfd497a8fc5" 2025-09-03 00:40:22.422274 | orchestrator |  } 2025-09-03 00:40:22.422284 | orchestrator |  ] 2025-09-03 00:40:22.422295 | orchestrator |  } 2025-09-03 00:40:22.422307 | orchestrator | } 2025-09-03 00:40:22.422318 | orchestrator | 2025-09-03 00:40:22.422329 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-09-03 00:40:22.422340 | orchestrator | 2025-09-03 00:40:22.422350 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-03 00:40:22.422361 | orchestrator | Wednesday 03 September 2025 00:40:19 +0000 (0:00:00.309) 0:00:24.200 *** 2025-09-03 00:40:22.422372 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-09-03 00:40:22.422393 | orchestrator | 2025-09-03 00:40:22.422404 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-03 00:40:22.422415 | orchestrator | Wednesday 03 September 2025 00:40:19 +0000 (0:00:00.342) 0:00:24.542 *** 2025-09-03 00:40:22.422426 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:40:22.422437 | orchestrator | 2025-09-03 00:40:22.422448 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:40:22.422459 | orchestrator | Wednesday 03 September 2025 00:40:20 +0000 (0:00:00.275) 0:00:24.818 *** 2025-09-03 00:40:22.422488 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-09-03 00:40:22.422499 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-09-03 00:40:22.422510 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-09-03 00:40:22.422521 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-09-03 00:40:22.422532 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-09-03 00:40:22.422543 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-09-03 00:40:22.422554 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-09-03 00:40:22.422569 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-09-03 00:40:22.422580 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-09-03 00:40:22.422591 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-09-03 00:40:22.422603 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-09-03 00:40:22.422614 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 
2025-09-03 00:40:22.422625 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-09-03 00:40:22.422635 | orchestrator | 2025-09-03 00:40:22.422646 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:40:22.422657 | orchestrator | Wednesday 03 September 2025 00:40:20 +0000 (0:00:00.456) 0:00:25.274 *** 2025-09-03 00:40:22.422668 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:40:22.422679 | orchestrator | 2025-09-03 00:40:22.422690 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:40:22.422701 | orchestrator | Wednesday 03 September 2025 00:40:20 +0000 (0:00:00.206) 0:00:25.480 *** 2025-09-03 00:40:22.422711 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:40:22.422722 | orchestrator | 2025-09-03 00:40:22.422733 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:40:22.422744 | orchestrator | Wednesday 03 September 2025 00:40:21 +0000 (0:00:00.190) 0:00:25.671 *** 2025-09-03 00:40:22.422755 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:40:22.422765 | orchestrator | 2025-09-03 00:40:22.422776 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:40:22.422787 | orchestrator | Wednesday 03 September 2025 00:40:21 +0000 (0:00:00.579) 0:00:26.250 *** 2025-09-03 00:40:22.422797 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:40:22.422808 | orchestrator | 2025-09-03 00:40:22.422819 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:40:22.422830 | orchestrator | Wednesday 03 September 2025 00:40:21 +0000 (0:00:00.191) 0:00:26.441 *** 2025-09-03 00:40:22.422841 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:40:22.422852 | orchestrator | 2025-09-03 00:40:22.422863 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:40:22.422874 | orchestrator | Wednesday 03 September 2025 00:40:21 +0000 (0:00:00.195) 0:00:26.637 *** 2025-09-03 00:40:22.422884 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:40:22.422895 | orchestrator | 2025-09-03 00:40:22.422914 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:40:22.422925 | orchestrator | Wednesday 03 September 2025 00:40:22 +0000 (0:00:00.204) 0:00:26.841 *** 2025-09-03 00:40:22.422936 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:40:22.422947 | orchestrator | 2025-09-03 00:40:22.422967 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:40:32.776026 | orchestrator | Wednesday 03 September 2025 00:40:22 +0000 (0:00:00.210) 0:00:27.051 *** 2025-09-03 00:40:32.776858 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:40:32.776892 | orchestrator | 2025-09-03 00:40:32.776907 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:40:32.776918 | orchestrator | Wednesday 03 September 2025 00:40:22 +0000 (0:00:00.206) 0:00:27.258 *** 2025-09-03 00:40:32.776928 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_c87564b4-441b-42f3-97de-587e6061c3ae) 2025-09-03 00:40:32.776940 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_c87564b4-441b-42f3-97de-587e6061c3ae) 2025-09-03 
00:40:32.776950 | orchestrator | 2025-09-03 00:40:32.776960 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:40:32.776970 | orchestrator | Wednesday 03 September 2025 00:40:23 +0000 (0:00:00.433) 0:00:27.692 *** 2025-09-03 00:40:32.776980 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f4ffaa61-7d7a-4b4d-ae66-bf9c1470deb3) 2025-09-03 00:40:32.776990 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f4ffaa61-7d7a-4b4d-ae66-bf9c1470deb3) 2025-09-03 00:40:32.777000 | orchestrator | 2025-09-03 00:40:32.777010 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:40:32.777020 | orchestrator | Wednesday 03 September 2025 00:40:23 +0000 (0:00:00.434) 0:00:28.126 *** 2025-09-03 00:40:32.777029 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_89937d38-622a-4519-a70d-71f9b6cc380e) 2025-09-03 00:40:32.777039 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_89937d38-622a-4519-a70d-71f9b6cc380e) 2025-09-03 00:40:32.777049 | orchestrator | 2025-09-03 00:40:32.777059 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:40:32.777068 | orchestrator | Wednesday 03 September 2025 00:40:24 +0000 (0:00:00.542) 0:00:28.669 *** 2025-09-03 00:40:32.777078 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_2aa4af3c-ac98-453f-b557-6d0c203c4201) 2025-09-03 00:40:32.777088 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_2aa4af3c-ac98-453f-b557-6d0c203c4201) 2025-09-03 00:40:32.777098 | orchestrator | 2025-09-03 00:40:32.777107 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:40:32.777141 | orchestrator | Wednesday 03 September 2025 00:40:24 +0000 (0:00:00.435) 0:00:29.105 *** 2025-09-03 00:40:32.777152 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-03 00:40:32.777162 | orchestrator | 2025-09-03 00:40:32.777172 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:40:32.777182 | orchestrator | Wednesday 03 September 2025 00:40:24 +0000 (0:00:00.435) 0:00:29.541 *** 2025-09-03 00:40:32.777192 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-09-03 00:40:32.777221 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-09-03 00:40:32.777231 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-09-03 00:40:32.777241 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-09-03 00:40:32.777251 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-09-03 00:40:32.777260 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-09-03 00:40:32.777270 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-09-03 00:40:32.777301 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-09-03 00:40:32.777311 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-09-03 00:40:32.777321 | 
orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-09-03 00:40:32.777331 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-09-03 00:40:32.777340 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-09-03 00:40:32.777350 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-09-03 00:40:32.777359 | orchestrator | 2025-09-03 00:40:32.777369 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:40:32.777378 | orchestrator | Wednesday 03 September 2025 00:40:25 +0000 (0:00:00.615) 0:00:30.156 *** 2025-09-03 00:40:32.777388 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:40:32.777398 | orchestrator | 2025-09-03 00:40:32.777408 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:40:32.777418 | orchestrator | Wednesday 03 September 2025 00:40:25 +0000 (0:00:00.243) 0:00:30.400 *** 2025-09-03 00:40:32.777427 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:40:32.777437 | orchestrator | 2025-09-03 00:40:32.777447 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:40:32.777457 | orchestrator | Wednesday 03 September 2025 00:40:25 +0000 (0:00:00.187) 0:00:30.587 *** 2025-09-03 00:40:32.777467 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:40:32.777477 | orchestrator | 2025-09-03 00:40:32.777486 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:40:32.777496 | orchestrator | Wednesday 03 September 2025 00:40:26 +0000 (0:00:00.180) 0:00:30.767 *** 2025-09-03 00:40:32.777506 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:40:32.777516 | orchestrator | 2025-09-03 00:40:32.777543 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:40:32.777554 | orchestrator | Wednesday 03 September 2025 00:40:26 +0000 (0:00:00.247) 0:00:31.015 *** 2025-09-03 00:40:32.777564 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:40:32.777574 | orchestrator | 2025-09-03 00:40:32.777583 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:40:32.777593 | orchestrator | Wednesday 03 September 2025 00:40:26 +0000 (0:00:00.185) 0:00:31.201 *** 2025-09-03 00:40:32.777603 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:40:32.777612 | orchestrator | 2025-09-03 00:40:32.777622 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:40:32.777632 | orchestrator | Wednesday 03 September 2025 00:40:26 +0000 (0:00:00.197) 0:00:31.399 *** 2025-09-03 00:40:32.777642 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:40:32.777651 | orchestrator | 2025-09-03 00:40:32.777661 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:40:32.777671 | orchestrator | Wednesday 03 September 2025 00:40:26 +0000 (0:00:00.181) 0:00:31.580 *** 2025-09-03 00:40:32.777681 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:40:32.777690 | orchestrator | 2025-09-03 00:40:32.777700 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:40:32.777710 | orchestrator 
| Wednesday 03 September 2025 00:40:27 +0000 (0:00:00.195) 0:00:31.776 *** 2025-09-03 00:40:32.777720 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-09-03 00:40:32.777730 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-09-03 00:40:32.777740 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-09-03 00:40:32.777750 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-09-03 00:40:32.777760 | orchestrator | 2025-09-03 00:40:32.777770 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:40:32.777780 | orchestrator | Wednesday 03 September 2025 00:40:27 +0000 (0:00:00.829) 0:00:32.606 *** 2025-09-03 00:40:32.777798 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:40:32.777808 | orchestrator | 2025-09-03 00:40:32.777818 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:40:32.777828 | orchestrator | Wednesday 03 September 2025 00:40:28 +0000 (0:00:00.191) 0:00:32.797 *** 2025-09-03 00:40:32.777838 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:40:32.777848 | orchestrator | 2025-09-03 00:40:32.777858 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:40:32.777867 | orchestrator | Wednesday 03 September 2025 00:40:28 +0000 (0:00:00.180) 0:00:32.978 *** 2025-09-03 00:40:32.777877 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:40:32.777887 | orchestrator | 2025-09-03 00:40:32.777897 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:40:32.777906 | orchestrator | Wednesday 03 September 2025 00:40:28 +0000 (0:00:00.643) 0:00:33.622 *** 2025-09-03 00:40:32.777916 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:40:32.777926 | orchestrator | 2025-09-03 00:40:32.777936 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-09-03 00:40:32.777946 | orchestrator | Wednesday 03 September 2025 00:40:29 +0000 (0:00:00.193) 0:00:33.815 *** 2025-09-03 00:40:32.777956 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:40:32.777965 | orchestrator | 2025-09-03 00:40:32.777975 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-09-03 00:40:32.777985 | orchestrator | Wednesday 03 September 2025 00:40:29 +0000 (0:00:00.138) 0:00:33.953 *** 2025-09-03 00:40:32.777995 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '400ae980-4c36-5b9b-960d-631158f9c2c9'}}) 2025-09-03 00:40:32.778005 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1107a6cb-8e5a-5215-8b60-1d473d685075'}}) 2025-09-03 00:40:32.778064 | orchestrator | 2025-09-03 00:40:32.778077 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-09-03 00:40:32.778087 | orchestrator | Wednesday 03 September 2025 00:40:29 +0000 (0:00:00.179) 0:00:34.133 *** 2025-09-03 00:40:32.778098 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-400ae980-4c36-5b9b-960d-631158f9c2c9', 'data_vg': 'ceph-400ae980-4c36-5b9b-960d-631158f9c2c9'}) 2025-09-03 00:40:32.778207 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-1107a6cb-8e5a-5215-8b60-1d473d685075', 'data_vg': 'ceph-1107a6cb-8e5a-5215-8b60-1d473d685075'}) 2025-09-03 00:40:32.778220 | orchestrator | 2025-09-03 00:40:32.778230 | orchestrator | TASK 
[Print 'Create block VGs'] ************************************************ 2025-09-03 00:40:32.778240 | orchestrator | Wednesday 03 September 2025 00:40:31 +0000 (0:00:01.840) 0:00:35.973 *** 2025-09-03 00:40:32.778250 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-400ae980-4c36-5b9b-960d-631158f9c2c9', 'data_vg': 'ceph-400ae980-4c36-5b9b-960d-631158f9c2c9'})  2025-09-03 00:40:32.778262 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1107a6cb-8e5a-5215-8b60-1d473d685075', 'data_vg': 'ceph-1107a6cb-8e5a-5215-8b60-1d473d685075'})  2025-09-03 00:40:32.778272 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:40:32.778282 | orchestrator | 2025-09-03 00:40:32.778291 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-09-03 00:40:32.778301 | orchestrator | Wednesday 03 September 2025 00:40:31 +0000 (0:00:00.141) 0:00:36.115 *** 2025-09-03 00:40:32.778311 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-400ae980-4c36-5b9b-960d-631158f9c2c9', 'data_vg': 'ceph-400ae980-4c36-5b9b-960d-631158f9c2c9'}) 2025-09-03 00:40:32.778321 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-1107a6cb-8e5a-5215-8b60-1d473d685075', 'data_vg': 'ceph-1107a6cb-8e5a-5215-8b60-1d473d685075'}) 2025-09-03 00:40:32.778330 | orchestrator | 2025-09-03 00:40:32.778349 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-09-03 00:40:37.728267 | orchestrator | Wednesday 03 September 2025 00:40:32 +0000 (0:00:01.289) 0:00:37.404 *** 2025-09-03 00:40:37.728410 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-400ae980-4c36-5b9b-960d-631158f9c2c9', 'data_vg': 'ceph-400ae980-4c36-5b9b-960d-631158f9c2c9'})  2025-09-03 00:40:37.728428 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1107a6cb-8e5a-5215-8b60-1d473d685075', 'data_vg': 'ceph-1107a6cb-8e5a-5215-8b60-1d473d685075'})  2025-09-03 00:40:37.728440 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:40:37.728454 | orchestrator | 2025-09-03 00:40:37.728466 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-09-03 00:40:37.728477 | orchestrator | Wednesday 03 September 2025 00:40:32 +0000 (0:00:00.132) 0:00:37.537 *** 2025-09-03 00:40:37.728488 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:40:37.728499 | orchestrator | 2025-09-03 00:40:37.728510 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-09-03 00:40:37.728522 | orchestrator | Wednesday 03 September 2025 00:40:33 +0000 (0:00:00.113) 0:00:37.650 *** 2025-09-03 00:40:37.728533 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-400ae980-4c36-5b9b-960d-631158f9c2c9', 'data_vg': 'ceph-400ae980-4c36-5b9b-960d-631158f9c2c9'})  2025-09-03 00:40:37.728561 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1107a6cb-8e5a-5215-8b60-1d473d685075', 'data_vg': 'ceph-1107a6cb-8e5a-5215-8b60-1d473d685075'})  2025-09-03 00:40:37.728573 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:40:37.728584 | orchestrator | 2025-09-03 00:40:37.728595 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-09-03 00:40:37.728606 | orchestrator | Wednesday 03 September 2025 00:40:33 +0000 (0:00:00.139) 0:00:37.790 *** 2025-09-03 00:40:37.728617 | orchestrator | skipping: [testbed-node-4] 
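For orientation: the "Create block VGs" and "Create block LVs" tasks above provision one LVM volume group and one logical volume per configured OSD device, both named after the device's osd_lvm_uuid (ceph-&lt;uuid&gt; / osd-block-&lt;uuid&gt;). The underlying tasks live in the OSISM playbooks under /ansible/ and are not printed in this console log; the following is only a minimal sketch of what equivalent tasks could look like, assuming the community.general LVM modules. The data/data_vg loop keys and the lvm_volumes name are taken from the output above; the variable _block_vgs_to_pvs and the 100%FREE sizing are assumptions.

```yaml
# Sketch only; the real tasks are part of the OSISM ansible-playbooks and are not
# reproduced in this console log.
- name: Create block VGs (sketch)
  community.general.lvg:
    vg: "{{ item.key }}"        # e.g. ceph-400ae980-4c36-5b9b-960d-631158f9c2c9
    pvs: "{{ item.value }}"     # e.g. /dev/sdb, from the VG -> PV dict built earlier
    state: present
  loop: "{{ _block_vgs_to_pvs | dict2items }}"   # _block_vgs_to_pvs is a hypothetical name

- name: Create block LVs (sketch)
  community.general.lvol:
    vg: "{{ item.data_vg }}"    # matches the data_vg key in the loop items above
    lv: "{{ item.data }}"       # e.g. osd-block-400ae980-4c36-5b9b-960d-631158f9c2c9
    size: 100%FREE              # assumption: the OSD block LV uses the whole device
    state: present
  loop: "{{ lvm_volumes }}"
```

Under the hood this corresponds roughly to one vgcreate and one lvcreate per OSD device, which matches the two changed items reported for each task on testbed-node-4.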
2025-09-03 00:40:37.728627 | orchestrator | 2025-09-03 00:40:37.728638 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-09-03 00:40:37.728649 | orchestrator | Wednesday 03 September 2025 00:40:33 +0000 (0:00:00.126) 0:00:37.917 *** 2025-09-03 00:40:37.728660 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-400ae980-4c36-5b9b-960d-631158f9c2c9', 'data_vg': 'ceph-400ae980-4c36-5b9b-960d-631158f9c2c9'})  2025-09-03 00:40:37.728672 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1107a6cb-8e5a-5215-8b60-1d473d685075', 'data_vg': 'ceph-1107a6cb-8e5a-5215-8b60-1d473d685075'})  2025-09-03 00:40:37.728683 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:40:37.728694 | orchestrator | 2025-09-03 00:40:37.728705 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-09-03 00:40:37.728716 | orchestrator | Wednesday 03 September 2025 00:40:33 +0000 (0:00:00.128) 0:00:38.046 *** 2025-09-03 00:40:37.728734 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:40:37.728745 | orchestrator | 2025-09-03 00:40:37.728756 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-09-03 00:40:37.728766 | orchestrator | Wednesday 03 September 2025 00:40:33 +0000 (0:00:00.239) 0:00:38.285 *** 2025-09-03 00:40:37.728777 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-400ae980-4c36-5b9b-960d-631158f9c2c9', 'data_vg': 'ceph-400ae980-4c36-5b9b-960d-631158f9c2c9'})  2025-09-03 00:40:37.728789 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1107a6cb-8e5a-5215-8b60-1d473d685075', 'data_vg': 'ceph-1107a6cb-8e5a-5215-8b60-1d473d685075'})  2025-09-03 00:40:37.728800 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:40:37.728810 | orchestrator | 2025-09-03 00:40:37.728821 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-09-03 00:40:37.728832 | orchestrator | Wednesday 03 September 2025 00:40:33 +0000 (0:00:00.130) 0:00:38.415 *** 2025-09-03 00:40:37.728843 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:40:37.728854 | orchestrator | 2025-09-03 00:40:37.728865 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-09-03 00:40:37.728876 | orchestrator | Wednesday 03 September 2025 00:40:33 +0000 (0:00:00.127) 0:00:38.543 *** 2025-09-03 00:40:37.728894 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-400ae980-4c36-5b9b-960d-631158f9c2c9', 'data_vg': 'ceph-400ae980-4c36-5b9b-960d-631158f9c2c9'})  2025-09-03 00:40:37.728906 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1107a6cb-8e5a-5215-8b60-1d473d685075', 'data_vg': 'ceph-1107a6cb-8e5a-5215-8b60-1d473d685075'})  2025-09-03 00:40:37.728917 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:40:37.728928 | orchestrator | 2025-09-03 00:40:37.728939 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-09-03 00:40:37.728950 | orchestrator | Wednesday 03 September 2025 00:40:34 +0000 (0:00:00.148) 0:00:38.692 *** 2025-09-03 00:40:37.728961 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-400ae980-4c36-5b9b-960d-631158f9c2c9', 'data_vg': 'ceph-400ae980-4c36-5b9b-960d-631158f9c2c9'})  2025-09-03 00:40:37.728971 | orchestrator | skipping: [testbed-node-4] => (item={'data': 
'osd-block-1107a6cb-8e5a-5215-8b60-1d473d685075', 'data_vg': 'ceph-1107a6cb-8e5a-5215-8b60-1d473d685075'})  2025-09-03 00:40:37.728982 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:40:37.728993 | orchestrator | 2025-09-03 00:40:37.729004 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-09-03 00:40:37.729015 | orchestrator | Wednesday 03 September 2025 00:40:34 +0000 (0:00:00.143) 0:00:38.835 *** 2025-09-03 00:40:37.729042 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-400ae980-4c36-5b9b-960d-631158f9c2c9', 'data_vg': 'ceph-400ae980-4c36-5b9b-960d-631158f9c2c9'})  2025-09-03 00:40:37.729055 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1107a6cb-8e5a-5215-8b60-1d473d685075', 'data_vg': 'ceph-1107a6cb-8e5a-5215-8b60-1d473d685075'})  2025-09-03 00:40:37.729066 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:40:37.729077 | orchestrator | 2025-09-03 00:40:37.729088 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-09-03 00:40:37.729098 | orchestrator | Wednesday 03 September 2025 00:40:34 +0000 (0:00:00.129) 0:00:38.965 *** 2025-09-03 00:40:37.729109 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:40:37.729141 | orchestrator | 2025-09-03 00:40:37.729153 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-09-03 00:40:37.729164 | orchestrator | Wednesday 03 September 2025 00:40:34 +0000 (0:00:00.121) 0:00:39.086 *** 2025-09-03 00:40:37.729174 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:40:37.729185 | orchestrator | 2025-09-03 00:40:37.729196 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-09-03 00:40:37.729207 | orchestrator | Wednesday 03 September 2025 00:40:34 +0000 (0:00:00.108) 0:00:39.195 *** 2025-09-03 00:40:37.729218 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:40:37.729229 | orchestrator | 2025-09-03 00:40:37.729240 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-09-03 00:40:37.729251 | orchestrator | Wednesday 03 September 2025 00:40:34 +0000 (0:00:00.130) 0:00:39.325 *** 2025-09-03 00:40:37.729262 | orchestrator | ok: [testbed-node-4] => { 2025-09-03 00:40:37.729273 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-09-03 00:40:37.729285 | orchestrator | } 2025-09-03 00:40:37.729297 | orchestrator | 2025-09-03 00:40:37.729308 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-09-03 00:40:37.729319 | orchestrator | Wednesday 03 September 2025 00:40:34 +0000 (0:00:00.130) 0:00:39.456 *** 2025-09-03 00:40:37.729330 | orchestrator | ok: [testbed-node-4] => { 2025-09-03 00:40:37.729341 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-09-03 00:40:37.729352 | orchestrator | } 2025-09-03 00:40:37.729363 | orchestrator | 2025-09-03 00:40:37.729373 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-09-03 00:40:37.729384 | orchestrator | Wednesday 03 September 2025 00:40:34 +0000 (0:00:00.120) 0:00:39.577 *** 2025-09-03 00:40:37.729395 | orchestrator | ok: [testbed-node-4] => { 2025-09-03 00:40:37.729407 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-09-03 00:40:37.729425 | orchestrator | } 2025-09-03 00:40:37.729437 | orchestrator | 2025-09-03 00:40:37.729448 | orchestrator | TASK [Gather DB VGs 
with total and available size in bytes] ******************** 2025-09-03 00:40:37.729459 | orchestrator | Wednesday 03 September 2025 00:40:35 +0000 (0:00:00.118) 0:00:39.695 *** 2025-09-03 00:40:37.729470 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:40:37.729481 | orchestrator | 2025-09-03 00:40:37.729492 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-09-03 00:40:37.729503 | orchestrator | Wednesday 03 September 2025 00:40:35 +0000 (0:00:00.598) 0:00:40.293 *** 2025-09-03 00:40:37.729519 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:40:37.729530 | orchestrator | 2025-09-03 00:40:37.729541 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-09-03 00:40:37.729552 | orchestrator | Wednesday 03 September 2025 00:40:36 +0000 (0:00:00.513) 0:00:40.807 *** 2025-09-03 00:40:37.729563 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:40:37.729574 | orchestrator | 2025-09-03 00:40:37.729585 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-09-03 00:40:37.729596 | orchestrator | Wednesday 03 September 2025 00:40:36 +0000 (0:00:00.531) 0:00:41.338 *** 2025-09-03 00:40:37.729607 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:40:37.729618 | orchestrator | 2025-09-03 00:40:37.729629 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-09-03 00:40:37.729639 | orchestrator | Wednesday 03 September 2025 00:40:36 +0000 (0:00:00.143) 0:00:41.481 *** 2025-09-03 00:40:37.729650 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:40:37.729661 | orchestrator | 2025-09-03 00:40:37.729672 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-09-03 00:40:37.729683 | orchestrator | Wednesday 03 September 2025 00:40:36 +0000 (0:00:00.103) 0:00:41.585 *** 2025-09-03 00:40:37.729693 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:40:37.729704 | orchestrator | 2025-09-03 00:40:37.729715 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-09-03 00:40:37.729726 | orchestrator | Wednesday 03 September 2025 00:40:37 +0000 (0:00:00.099) 0:00:41.684 *** 2025-09-03 00:40:37.729736 | orchestrator | ok: [testbed-node-4] => { 2025-09-03 00:40:37.729748 | orchestrator |  "vgs_report": { 2025-09-03 00:40:37.729759 | orchestrator |  "vg": [] 2025-09-03 00:40:37.729771 | orchestrator |  } 2025-09-03 00:40:37.729782 | orchestrator | } 2025-09-03 00:40:37.729793 | orchestrator | 2025-09-03 00:40:37.729804 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-09-03 00:40:37.729816 | orchestrator | Wednesday 03 September 2025 00:40:37 +0000 (0:00:00.138) 0:00:41.822 *** 2025-09-03 00:40:37.729827 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:40:37.729838 | orchestrator | 2025-09-03 00:40:37.729849 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-09-03 00:40:37.729859 | orchestrator | Wednesday 03 September 2025 00:40:37 +0000 (0:00:00.125) 0:00:41.948 *** 2025-09-03 00:40:37.729871 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:40:37.729882 | orchestrator | 2025-09-03 00:40:37.729892 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-09-03 00:40:37.729903 | orchestrator | Wednesday 03 September 2025 00:40:37 +0000 
(0:00:00.140) 0:00:42.089 *** 2025-09-03 00:40:37.729914 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:40:37.729925 | orchestrator | 2025-09-03 00:40:37.729936 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-09-03 00:40:37.729947 | orchestrator | Wednesday 03 September 2025 00:40:37 +0000 (0:00:00.135) 0:00:42.225 *** 2025-09-03 00:40:37.729959 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:40:37.729969 | orchestrator | 2025-09-03 00:40:37.729981 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-09-03 00:40:37.729998 | orchestrator | Wednesday 03 September 2025 00:40:37 +0000 (0:00:00.135) 0:00:42.361 *** 2025-09-03 00:40:42.155494 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:40:42.155613 | orchestrator | 2025-09-03 00:40:42.155659 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-09-03 00:40:42.155673 | orchestrator | Wednesday 03 September 2025 00:40:37 +0000 (0:00:00.119) 0:00:42.480 *** 2025-09-03 00:40:42.155685 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:40:42.155696 | orchestrator | 2025-09-03 00:40:42.155708 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-09-03 00:40:42.155719 | orchestrator | Wednesday 03 September 2025 00:40:38 +0000 (0:00:00.329) 0:00:42.810 *** 2025-09-03 00:40:42.155729 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:40:42.155740 | orchestrator | 2025-09-03 00:40:42.155752 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-09-03 00:40:42.155762 | orchestrator | Wednesday 03 September 2025 00:40:38 +0000 (0:00:00.131) 0:00:42.941 *** 2025-09-03 00:40:42.155773 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:40:42.155784 | orchestrator | 2025-09-03 00:40:42.155795 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-09-03 00:40:42.155806 | orchestrator | Wednesday 03 September 2025 00:40:38 +0000 (0:00:00.135) 0:00:43.076 *** 2025-09-03 00:40:42.155817 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:40:42.155827 | orchestrator | 2025-09-03 00:40:42.155838 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-09-03 00:40:42.155849 | orchestrator | Wednesday 03 September 2025 00:40:38 +0000 (0:00:00.127) 0:00:43.204 *** 2025-09-03 00:40:42.155859 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:40:42.155870 | orchestrator | 2025-09-03 00:40:42.155881 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-09-03 00:40:42.155891 | orchestrator | Wednesday 03 September 2025 00:40:38 +0000 (0:00:00.138) 0:00:43.343 *** 2025-09-03 00:40:42.155902 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:40:42.155912 | orchestrator | 2025-09-03 00:40:42.155923 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-09-03 00:40:42.155934 | orchestrator | Wednesday 03 September 2025 00:40:38 +0000 (0:00:00.127) 0:00:43.470 *** 2025-09-03 00:40:42.155945 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:40:42.155956 | orchestrator | 2025-09-03 00:40:42.155966 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-09-03 00:40:42.155977 | orchestrator | Wednesday 03 September 2025 
00:40:38 +0000 (0:00:00.131) 0:00:43.602 *** 2025-09-03 00:40:42.155988 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:40:42.155998 | orchestrator | 2025-09-03 00:40:42.156009 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-09-03 00:40:42.156023 | orchestrator | Wednesday 03 September 2025 00:40:39 +0000 (0:00:00.124) 0:00:43.727 *** 2025-09-03 00:40:42.156035 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:40:42.156048 | orchestrator | 2025-09-03 00:40:42.156061 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-09-03 00:40:42.156073 | orchestrator | Wednesday 03 September 2025 00:40:39 +0000 (0:00:00.132) 0:00:43.859 *** 2025-09-03 00:40:42.156101 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-400ae980-4c36-5b9b-960d-631158f9c2c9', 'data_vg': 'ceph-400ae980-4c36-5b9b-960d-631158f9c2c9'})  2025-09-03 00:40:42.156140 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1107a6cb-8e5a-5215-8b60-1d473d685075', 'data_vg': 'ceph-1107a6cb-8e5a-5215-8b60-1d473d685075'})  2025-09-03 00:40:42.156154 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:40:42.156167 | orchestrator | 2025-09-03 00:40:42.156179 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-09-03 00:40:42.156192 | orchestrator | Wednesday 03 September 2025 00:40:39 +0000 (0:00:00.158) 0:00:44.018 *** 2025-09-03 00:40:42.156205 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-400ae980-4c36-5b9b-960d-631158f9c2c9', 'data_vg': 'ceph-400ae980-4c36-5b9b-960d-631158f9c2c9'})  2025-09-03 00:40:42.156217 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1107a6cb-8e5a-5215-8b60-1d473d685075', 'data_vg': 'ceph-1107a6cb-8e5a-5215-8b60-1d473d685075'})  2025-09-03 00:40:42.156238 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:40:42.156251 | orchestrator | 2025-09-03 00:40:42.156264 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-09-03 00:40:42.156276 | orchestrator | Wednesday 03 September 2025 00:40:39 +0000 (0:00:00.154) 0:00:44.173 *** 2025-09-03 00:40:42.156289 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-400ae980-4c36-5b9b-960d-631158f9c2c9', 'data_vg': 'ceph-400ae980-4c36-5b9b-960d-631158f9c2c9'})  2025-09-03 00:40:42.156302 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1107a6cb-8e5a-5215-8b60-1d473d685075', 'data_vg': 'ceph-1107a6cb-8e5a-5215-8b60-1d473d685075'})  2025-09-03 00:40:42.156315 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:40:42.156328 | orchestrator | 2025-09-03 00:40:42.156340 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-09-03 00:40:42.156353 | orchestrator | Wednesday 03 September 2025 00:40:39 +0000 (0:00:00.154) 0:00:44.327 *** 2025-09-03 00:40:42.156367 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-400ae980-4c36-5b9b-960d-631158f9c2c9', 'data_vg': 'ceph-400ae980-4c36-5b9b-960d-631158f9c2c9'})  2025-09-03 00:40:42.156378 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1107a6cb-8e5a-5215-8b60-1d473d685075', 'data_vg': 'ceph-1107a6cb-8e5a-5215-8b60-1d473d685075'})  2025-09-03 00:40:42.156389 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:40:42.156400 | orchestrator | 2025-09-03 00:40:42.156411 | 
orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-09-03 00:40:42.156439 | orchestrator | Wednesday 03 September 2025 00:40:40 +0000 (0:00:00.349) 0:00:44.677 *** 2025-09-03 00:40:42.156451 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-400ae980-4c36-5b9b-960d-631158f9c2c9', 'data_vg': 'ceph-400ae980-4c36-5b9b-960d-631158f9c2c9'})  2025-09-03 00:40:42.156463 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1107a6cb-8e5a-5215-8b60-1d473d685075', 'data_vg': 'ceph-1107a6cb-8e5a-5215-8b60-1d473d685075'})  2025-09-03 00:40:42.156474 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:40:42.156484 | orchestrator | 2025-09-03 00:40:42.156495 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-09-03 00:40:42.156506 | orchestrator | Wednesday 03 September 2025 00:40:40 +0000 (0:00:00.151) 0:00:44.828 *** 2025-09-03 00:40:42.156517 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-400ae980-4c36-5b9b-960d-631158f9c2c9', 'data_vg': 'ceph-400ae980-4c36-5b9b-960d-631158f9c2c9'})  2025-09-03 00:40:42.156528 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1107a6cb-8e5a-5215-8b60-1d473d685075', 'data_vg': 'ceph-1107a6cb-8e5a-5215-8b60-1d473d685075'})  2025-09-03 00:40:42.156539 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:40:42.156550 | orchestrator | 2025-09-03 00:40:42.156561 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-09-03 00:40:42.156572 | orchestrator | Wednesday 03 September 2025 00:40:40 +0000 (0:00:00.148) 0:00:44.977 *** 2025-09-03 00:40:42.156582 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-400ae980-4c36-5b9b-960d-631158f9c2c9', 'data_vg': 'ceph-400ae980-4c36-5b9b-960d-631158f9c2c9'})  2025-09-03 00:40:42.156593 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1107a6cb-8e5a-5215-8b60-1d473d685075', 'data_vg': 'ceph-1107a6cb-8e5a-5215-8b60-1d473d685075'})  2025-09-03 00:40:42.156604 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:40:42.156615 | orchestrator | 2025-09-03 00:40:42.156626 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-09-03 00:40:42.156636 | orchestrator | Wednesday 03 September 2025 00:40:40 +0000 (0:00:00.148) 0:00:45.125 *** 2025-09-03 00:40:42.156647 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-400ae980-4c36-5b9b-960d-631158f9c2c9', 'data_vg': 'ceph-400ae980-4c36-5b9b-960d-631158f9c2c9'})  2025-09-03 00:40:42.156665 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1107a6cb-8e5a-5215-8b60-1d473d685075', 'data_vg': 'ceph-1107a6cb-8e5a-5215-8b60-1d473d685075'})  2025-09-03 00:40:42.156676 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:40:42.156687 | orchestrator | 2025-09-03 00:40:42.156699 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-09-03 00:40:42.156748 | orchestrator | Wednesday 03 September 2025 00:40:40 +0000 (0:00:00.144) 0:00:45.270 *** 2025-09-03 00:40:42.156760 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:40:42.156771 | orchestrator | 2025-09-03 00:40:42.156782 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-09-03 00:40:42.156793 | orchestrator | Wednesday 03 September 2025 00:40:41 +0000 (0:00:00.526) 
0:00:45.797 *** 2025-09-03 00:40:42.156804 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:40:42.156815 | orchestrator | 2025-09-03 00:40:42.156826 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-09-03 00:40:42.156836 | orchestrator | Wednesday 03 September 2025 00:40:41 +0000 (0:00:00.483) 0:00:46.280 *** 2025-09-03 00:40:42.156847 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:40:42.156858 | orchestrator | 2025-09-03 00:40:42.156869 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-09-03 00:40:42.156880 | orchestrator | Wednesday 03 September 2025 00:40:41 +0000 (0:00:00.112) 0:00:46.392 *** 2025-09-03 00:40:42.156891 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-1107a6cb-8e5a-5215-8b60-1d473d685075', 'vg_name': 'ceph-1107a6cb-8e5a-5215-8b60-1d473d685075'}) 2025-09-03 00:40:42.156902 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-400ae980-4c36-5b9b-960d-631158f9c2c9', 'vg_name': 'ceph-400ae980-4c36-5b9b-960d-631158f9c2c9'}) 2025-09-03 00:40:42.156913 | orchestrator | 2025-09-03 00:40:42.156924 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-09-03 00:40:42.156935 | orchestrator | Wednesday 03 September 2025 00:40:41 +0000 (0:00:00.125) 0:00:46.517 *** 2025-09-03 00:40:42.156946 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-400ae980-4c36-5b9b-960d-631158f9c2c9', 'data_vg': 'ceph-400ae980-4c36-5b9b-960d-631158f9c2c9'})  2025-09-03 00:40:42.156957 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1107a6cb-8e5a-5215-8b60-1d473d685075', 'data_vg': 'ceph-1107a6cb-8e5a-5215-8b60-1d473d685075'})  2025-09-03 00:40:42.156968 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:40:42.156979 | orchestrator | 2025-09-03 00:40:42.156990 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-09-03 00:40:42.157001 | orchestrator | Wednesday 03 September 2025 00:40:42 +0000 (0:00:00.132) 0:00:46.650 *** 2025-09-03 00:40:42.157011 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-400ae980-4c36-5b9b-960d-631158f9c2c9', 'data_vg': 'ceph-400ae980-4c36-5b9b-960d-631158f9c2c9'})  2025-09-03 00:40:42.157023 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1107a6cb-8e5a-5215-8b60-1d473d685075', 'data_vg': 'ceph-1107a6cb-8e5a-5215-8b60-1d473d685075'})  2025-09-03 00:40:42.157041 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:40:47.543228 | orchestrator | 2025-09-03 00:40:47.543347 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-09-03 00:40:47.543365 | orchestrator | Wednesday 03 September 2025 00:40:42 +0000 (0:00:00.138) 0:00:46.789 *** 2025-09-03 00:40:47.543379 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-400ae980-4c36-5b9b-960d-631158f9c2c9', 'data_vg': 'ceph-400ae980-4c36-5b9b-960d-631158f9c2c9'})  2025-09-03 00:40:47.543392 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1107a6cb-8e5a-5215-8b60-1d473d685075', 'data_vg': 'ceph-1107a6cb-8e5a-5215-8b60-1d473d685075'})  2025-09-03 00:40:47.543404 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:40:47.543419 | orchestrator | 2025-09-03 00:40:47.543430 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-09-03 00:40:47.543442 | 
orchestrator | Wednesday 03 September 2025 00:40:42 +0000 (0:00:00.130) 0:00:46.919 *** 2025-09-03 00:40:47.543478 | orchestrator | ok: [testbed-node-4] => { 2025-09-03 00:40:47.543490 | orchestrator |  "lvm_report": { 2025-09-03 00:40:47.543503 | orchestrator |  "lv": [ 2025-09-03 00:40:47.543514 | orchestrator |  { 2025-09-03 00:40:47.543526 | orchestrator |  "lv_name": "osd-block-1107a6cb-8e5a-5215-8b60-1d473d685075", 2025-09-03 00:40:47.543538 | orchestrator |  "vg_name": "ceph-1107a6cb-8e5a-5215-8b60-1d473d685075" 2025-09-03 00:40:47.543549 | orchestrator |  }, 2025-09-03 00:40:47.543560 | orchestrator |  { 2025-09-03 00:40:47.543571 | orchestrator |  "lv_name": "osd-block-400ae980-4c36-5b9b-960d-631158f9c2c9", 2025-09-03 00:40:47.543582 | orchestrator |  "vg_name": "ceph-400ae980-4c36-5b9b-960d-631158f9c2c9" 2025-09-03 00:40:47.543593 | orchestrator |  } 2025-09-03 00:40:47.543604 | orchestrator |  ], 2025-09-03 00:40:47.543615 | orchestrator |  "pv": [ 2025-09-03 00:40:47.543626 | orchestrator |  { 2025-09-03 00:40:47.543637 | orchestrator |  "pv_name": "/dev/sdb", 2025-09-03 00:40:47.543648 | orchestrator |  "vg_name": "ceph-400ae980-4c36-5b9b-960d-631158f9c2c9" 2025-09-03 00:40:47.543659 | orchestrator |  }, 2025-09-03 00:40:47.543670 | orchestrator |  { 2025-09-03 00:40:47.543681 | orchestrator |  "pv_name": "/dev/sdc", 2025-09-03 00:40:47.543692 | orchestrator |  "vg_name": "ceph-1107a6cb-8e5a-5215-8b60-1d473d685075" 2025-09-03 00:40:47.543702 | orchestrator |  } 2025-09-03 00:40:47.543713 | orchestrator |  ] 2025-09-03 00:40:47.543726 | orchestrator |  } 2025-09-03 00:40:47.543740 | orchestrator | } 2025-09-03 00:40:47.543754 | orchestrator | 2025-09-03 00:40:47.543767 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-09-03 00:40:47.543780 | orchestrator | 2025-09-03 00:40:47.543793 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-03 00:40:47.543807 | orchestrator | Wednesday 03 September 2025 00:40:42 +0000 (0:00:00.417) 0:00:47.337 *** 2025-09-03 00:40:47.543820 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-09-03 00:40:47.543834 | orchestrator | 2025-09-03 00:40:47.543861 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-03 00:40:47.543874 | orchestrator | Wednesday 03 September 2025 00:40:42 +0000 (0:00:00.217) 0:00:47.554 *** 2025-09-03 00:40:47.543887 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:40:47.543901 | orchestrator | 2025-09-03 00:40:47.543914 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:40:47.543927 | orchestrator | Wednesday 03 September 2025 00:40:43 +0000 (0:00:00.205) 0:00:47.759 *** 2025-09-03 00:40:47.543940 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-09-03 00:40:47.543953 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-09-03 00:40:47.543965 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-09-03 00:40:47.543979 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-09-03 00:40:47.543991 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-09-03 00:40:47.544004 | orchestrator | included: /ansible/tasks/_add-device-links.yml for 
testbed-node-5 => (item=loop5) 2025-09-03 00:40:47.544016 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-09-03 00:40:47.544029 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-09-03 00:40:47.544041 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-09-03 00:40:47.544055 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-09-03 00:40:47.544067 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-09-03 00:40:47.544089 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-09-03 00:40:47.544100 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-09-03 00:40:47.544129 | orchestrator | 2025-09-03 00:40:47.544142 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:40:47.544152 | orchestrator | Wednesday 03 September 2025 00:40:43 +0000 (0:00:00.362) 0:00:48.122 *** 2025-09-03 00:40:47.544163 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:40:47.544178 | orchestrator | 2025-09-03 00:40:47.544189 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:40:47.544200 | orchestrator | Wednesday 03 September 2025 00:40:43 +0000 (0:00:00.152) 0:00:48.274 *** 2025-09-03 00:40:47.544210 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:40:47.544221 | orchestrator | 2025-09-03 00:40:47.544232 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:40:47.544261 | orchestrator | Wednesday 03 September 2025 00:40:43 +0000 (0:00:00.179) 0:00:48.454 *** 2025-09-03 00:40:47.544274 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:40:47.544284 | orchestrator | 2025-09-03 00:40:47.544296 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:40:47.544306 | orchestrator | Wednesday 03 September 2025 00:40:43 +0000 (0:00:00.164) 0:00:48.618 *** 2025-09-03 00:40:47.544317 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:40:47.544328 | orchestrator | 2025-09-03 00:40:47.544339 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:40:47.544350 | orchestrator | Wednesday 03 September 2025 00:40:44 +0000 (0:00:00.157) 0:00:48.776 *** 2025-09-03 00:40:47.544361 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:40:47.544372 | orchestrator | 2025-09-03 00:40:47.544383 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:40:47.544394 | orchestrator | Wednesday 03 September 2025 00:40:44 +0000 (0:00:00.194) 0:00:48.971 *** 2025-09-03 00:40:47.544405 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:40:47.544416 | orchestrator | 2025-09-03 00:40:47.544426 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:40:47.544437 | orchestrator | Wednesday 03 September 2025 00:40:44 +0000 (0:00:00.432) 0:00:49.403 *** 2025-09-03 00:40:47.544448 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:40:47.544459 | orchestrator | 2025-09-03 00:40:47.544470 | orchestrator | TASK [Add known links to the list of available block devices] 
****************** 2025-09-03 00:40:47.544481 | orchestrator | Wednesday 03 September 2025 00:40:44 +0000 (0:00:00.164) 0:00:49.568 *** 2025-09-03 00:40:47.544491 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:40:47.544502 | orchestrator | 2025-09-03 00:40:47.544513 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:40:47.544524 | orchestrator | Wednesday 03 September 2025 00:40:45 +0000 (0:00:00.168) 0:00:49.737 *** 2025-09-03 00:40:47.544535 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_e8670175-54bd-41b5-bd3c-dd9ea44e7b4a) 2025-09-03 00:40:47.544547 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_e8670175-54bd-41b5-bd3c-dd9ea44e7b4a) 2025-09-03 00:40:47.544558 | orchestrator | 2025-09-03 00:40:47.544569 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:40:47.544579 | orchestrator | Wednesday 03 September 2025 00:40:45 +0000 (0:00:00.418) 0:00:50.155 *** 2025-09-03 00:40:47.544590 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_409307c9-8e7f-483b-a404-5462fce46233) 2025-09-03 00:40:47.544601 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_409307c9-8e7f-483b-a404-5462fce46233) 2025-09-03 00:40:47.544612 | orchestrator | 2025-09-03 00:40:47.544623 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:40:47.544634 | orchestrator | Wednesday 03 September 2025 00:40:45 +0000 (0:00:00.431) 0:00:50.587 *** 2025-09-03 00:40:47.544659 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_ce19fbd3-6a41-4577-8f91-9183654abf8c) 2025-09-03 00:40:47.544671 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_ce19fbd3-6a41-4577-8f91-9183654abf8c) 2025-09-03 00:40:47.544682 | orchestrator | 2025-09-03 00:40:47.544692 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:40:47.544703 | orchestrator | Wednesday 03 September 2025 00:40:46 +0000 (0:00:00.403) 0:00:50.991 *** 2025-09-03 00:40:47.544714 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d4852aea-51af-4111-8e77-3990a105da37) 2025-09-03 00:40:47.544725 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d4852aea-51af-4111-8e77-3990a105da37) 2025-09-03 00:40:47.544736 | orchestrator | 2025-09-03 00:40:47.544747 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-03 00:40:47.544758 | orchestrator | Wednesday 03 September 2025 00:40:46 +0000 (0:00:00.443) 0:00:51.434 *** 2025-09-03 00:40:47.544768 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-03 00:40:47.544779 | orchestrator | 2025-09-03 00:40:47.544790 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:40:47.544801 | orchestrator | Wednesday 03 September 2025 00:40:47 +0000 (0:00:00.328) 0:00:51.762 *** 2025-09-03 00:40:47.544811 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-09-03 00:40:47.544822 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-09-03 00:40:47.544833 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-09-03 00:40:47.544844 | orchestrator 
| included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-09-03 00:40:47.544855 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-09-03 00:40:47.544866 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-09-03 00:40:47.544877 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-09-03 00:40:47.544888 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-09-03 00:40:47.544899 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-09-03 00:40:47.544910 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-09-03 00:40:47.544921 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-09-03 00:40:47.544938 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-09-03 00:40:56.355995 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-09-03 00:40:56.356169 | orchestrator | 2025-09-03 00:40:56.356190 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:40:56.356204 | orchestrator | Wednesday 03 September 2025 00:40:47 +0000 (0:00:00.406) 0:00:52.169 *** 2025-09-03 00:40:56.356217 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:40:56.356231 | orchestrator | 2025-09-03 00:40:56.356243 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:40:56.356255 | orchestrator | Wednesday 03 September 2025 00:40:47 +0000 (0:00:00.187) 0:00:52.357 *** 2025-09-03 00:40:56.356266 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:40:56.356277 | orchestrator | 2025-09-03 00:40:56.356289 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:40:56.356300 | orchestrator | Wednesday 03 September 2025 00:40:47 +0000 (0:00:00.212) 0:00:52.570 *** 2025-09-03 00:40:56.356311 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:40:56.356322 | orchestrator | 2025-09-03 00:40:56.356333 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:40:56.356368 | orchestrator | Wednesday 03 September 2025 00:40:48 +0000 (0:00:00.706) 0:00:53.276 *** 2025-09-03 00:40:56.356380 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:40:56.356391 | orchestrator | 2025-09-03 00:40:56.356402 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:40:56.356413 | orchestrator | Wednesday 03 September 2025 00:40:48 +0000 (0:00:00.241) 0:00:53.517 *** 2025-09-03 00:40:56.356424 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:40:56.356435 | orchestrator | 2025-09-03 00:40:56.356447 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:40:56.356458 | orchestrator | Wednesday 03 September 2025 00:40:49 +0000 (0:00:00.198) 0:00:53.716 *** 2025-09-03 00:40:56.356469 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:40:56.356479 | orchestrator | 2025-09-03 00:40:56.356490 | orchestrator | TASK [Add known partitions to the list of available block devices] 
************* 2025-09-03 00:40:56.356501 | orchestrator | Wednesday 03 September 2025 00:40:49 +0000 (0:00:00.188) 0:00:53.905 *** 2025-09-03 00:40:56.356513 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:40:56.356527 | orchestrator | 2025-09-03 00:40:56.356539 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:40:56.356552 | orchestrator | Wednesday 03 September 2025 00:40:49 +0000 (0:00:00.189) 0:00:54.094 *** 2025-09-03 00:40:56.356565 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:40:56.356577 | orchestrator | 2025-09-03 00:40:56.356590 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:40:56.356603 | orchestrator | Wednesday 03 September 2025 00:40:49 +0000 (0:00:00.193) 0:00:54.288 *** 2025-09-03 00:40:56.356617 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-09-03 00:40:56.356630 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-09-03 00:40:56.356644 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-09-03 00:40:56.356657 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-09-03 00:40:56.356670 | orchestrator | 2025-09-03 00:40:56.356684 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:40:56.356697 | orchestrator | Wednesday 03 September 2025 00:40:50 +0000 (0:00:00.601) 0:00:54.889 *** 2025-09-03 00:40:56.356710 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:40:56.356723 | orchestrator | 2025-09-03 00:40:56.356739 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:40:56.356759 | orchestrator | Wednesday 03 September 2025 00:40:50 +0000 (0:00:00.184) 0:00:55.074 *** 2025-09-03 00:40:56.356779 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:40:56.356798 | orchestrator | 2025-09-03 00:40:56.356818 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:40:56.356838 | orchestrator | Wednesday 03 September 2025 00:40:50 +0000 (0:00:00.206) 0:00:55.281 *** 2025-09-03 00:40:56.356860 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:40:56.356874 | orchestrator | 2025-09-03 00:40:56.356886 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-03 00:40:56.356897 | orchestrator | Wednesday 03 September 2025 00:40:50 +0000 (0:00:00.188) 0:00:55.469 *** 2025-09-03 00:40:56.356908 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:40:56.356919 | orchestrator | 2025-09-03 00:40:56.356929 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-09-03 00:40:56.356940 | orchestrator | Wednesday 03 September 2025 00:40:51 +0000 (0:00:00.190) 0:00:55.659 *** 2025-09-03 00:40:56.356951 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:40:56.356962 | orchestrator | 2025-09-03 00:40:56.356973 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-09-03 00:40:56.356984 | orchestrator | Wednesday 03 September 2025 00:40:51 +0000 (0:00:00.321) 0:00:55.981 *** 2025-09-03 00:40:56.356995 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e75c81d9-f6c1-538f-9534-cc9e3445127a'}}) 2025-09-03 00:40:56.357006 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '634e15af-8858-53e6-9f62-917e12b08878'}}) 
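The loop items printed by "Create dict of block VGs -> PVs from ceph_osd_devices" are the dict2items form of the ceph_osd_devices variable, one entry per device with its osd_lvm_uuid. The exact implementation is not shown in this log; a hedged sketch of how such a VG -> PV mapping could be assembled with set_fact follows (ceph_osd_devices and osd_lvm_uuid come from the output above, _block_vgs_to_pvs is an assumed name).

```yaml
# Sketch only; _block_vgs_to_pvs is an assumed variable name.
- name: Create dict of block VGs -> PVs from ceph_osd_devices (sketch)
  ansible.builtin.set_fact:
    _block_vgs_to_pvs: >-
      {{ _block_vgs_to_pvs | default({})
         | combine({'ceph-' ~ item.value.osd_lvm_uuid: '/dev/' ~ item.key}) }}
  loop: "{{ ceph_osd_devices | dict2items }}"
```

For testbed-node-5 this yields one entry per OSD device (sdb and sdc here), consistent with the pv/vg pairs shown in the lvm_report for testbed-node-4 earlier in the log.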
2025-09-03 00:40:56.357027 | orchestrator | 2025-09-03 00:40:56.357038 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-09-03 00:40:56.357049 | orchestrator | Wednesday 03 September 2025 00:40:51 +0000 (0:00:00.186) 0:00:56.168 *** 2025-09-03 00:40:56.357061 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-e75c81d9-f6c1-538f-9534-cc9e3445127a', 'data_vg': 'ceph-e75c81d9-f6c1-538f-9534-cc9e3445127a'}) 2025-09-03 00:40:56.357074 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-634e15af-8858-53e6-9f62-917e12b08878', 'data_vg': 'ceph-634e15af-8858-53e6-9f62-917e12b08878'}) 2025-09-03 00:40:56.357085 | orchestrator | 2025-09-03 00:40:56.357097 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-09-03 00:40:56.357158 | orchestrator | Wednesday 03 September 2025 00:40:53 +0000 (0:00:01.886) 0:00:58.054 *** 2025-09-03 00:40:56.357181 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e75c81d9-f6c1-538f-9534-cc9e3445127a', 'data_vg': 'ceph-e75c81d9-f6c1-538f-9534-cc9e3445127a'})  2025-09-03 00:40:56.357202 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-634e15af-8858-53e6-9f62-917e12b08878', 'data_vg': 'ceph-634e15af-8858-53e6-9f62-917e12b08878'})  2025-09-03 00:40:56.357214 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:40:56.357225 | orchestrator | 2025-09-03 00:40:56.357236 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-09-03 00:40:56.357247 | orchestrator | Wednesday 03 September 2025 00:40:53 +0000 (0:00:00.146) 0:00:58.201 *** 2025-09-03 00:40:56.357258 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-e75c81d9-f6c1-538f-9534-cc9e3445127a', 'data_vg': 'ceph-e75c81d9-f6c1-538f-9534-cc9e3445127a'}) 2025-09-03 00:40:56.357288 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-634e15af-8858-53e6-9f62-917e12b08878', 'data_vg': 'ceph-634e15af-8858-53e6-9f62-917e12b08878'}) 2025-09-03 00:40:56.357301 | orchestrator | 2025-09-03 00:40:56.357312 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-09-03 00:40:56.357323 | orchestrator | Wednesday 03 September 2025 00:40:54 +0000 (0:00:01.270) 0:00:59.471 *** 2025-09-03 00:40:56.357334 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e75c81d9-f6c1-538f-9534-cc9e3445127a', 'data_vg': 'ceph-e75c81d9-f6c1-538f-9534-cc9e3445127a'})  2025-09-03 00:40:56.357345 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-634e15af-8858-53e6-9f62-917e12b08878', 'data_vg': 'ceph-634e15af-8858-53e6-9f62-917e12b08878'})  2025-09-03 00:40:56.357356 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:40:56.357367 | orchestrator | 2025-09-03 00:40:56.357378 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-09-03 00:40:56.357389 | orchestrator | Wednesday 03 September 2025 00:40:54 +0000 (0:00:00.153) 0:00:59.624 *** 2025-09-03 00:40:56.357400 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:40:56.357411 | orchestrator | 2025-09-03 00:40:56.357421 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-09-03 00:40:56.357432 | orchestrator | Wednesday 03 September 2025 00:40:55 +0000 (0:00:00.135) 0:00:59.760 *** 2025-09-03 00:40:56.357443 | orchestrator | skipping: 
[testbed-node-5] => (item={'data': 'osd-block-e75c81d9-f6c1-538f-9534-cc9e3445127a', 'data_vg': 'ceph-e75c81d9-f6c1-538f-9534-cc9e3445127a'})  2025-09-03 00:40:56.357460 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-634e15af-8858-53e6-9f62-917e12b08878', 'data_vg': 'ceph-634e15af-8858-53e6-9f62-917e12b08878'})  2025-09-03 00:40:56.357471 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:40:56.357482 | orchestrator | 2025-09-03 00:40:56.357493 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-09-03 00:40:56.357504 | orchestrator | Wednesday 03 September 2025 00:40:55 +0000 (0:00:00.140) 0:00:59.900 *** 2025-09-03 00:40:56.357514 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:40:56.357534 | orchestrator | 2025-09-03 00:40:56.357545 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-09-03 00:40:56.357556 | orchestrator | Wednesday 03 September 2025 00:40:55 +0000 (0:00:00.129) 0:01:00.030 *** 2025-09-03 00:40:56.357567 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e75c81d9-f6c1-538f-9534-cc9e3445127a', 'data_vg': 'ceph-e75c81d9-f6c1-538f-9534-cc9e3445127a'})  2025-09-03 00:40:56.357578 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-634e15af-8858-53e6-9f62-917e12b08878', 'data_vg': 'ceph-634e15af-8858-53e6-9f62-917e12b08878'})  2025-09-03 00:40:56.357589 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:40:56.357600 | orchestrator | 2025-09-03 00:40:56.357611 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-09-03 00:40:56.357621 | orchestrator | Wednesday 03 September 2025 00:40:55 +0000 (0:00:00.149) 0:01:00.179 *** 2025-09-03 00:40:56.357632 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:40:56.357643 | orchestrator | 2025-09-03 00:40:56.357654 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-09-03 00:40:56.357664 | orchestrator | Wednesday 03 September 2025 00:40:55 +0000 (0:00:00.124) 0:01:00.304 *** 2025-09-03 00:40:56.357675 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e75c81d9-f6c1-538f-9534-cc9e3445127a', 'data_vg': 'ceph-e75c81d9-f6c1-538f-9534-cc9e3445127a'})  2025-09-03 00:40:56.357686 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-634e15af-8858-53e6-9f62-917e12b08878', 'data_vg': 'ceph-634e15af-8858-53e6-9f62-917e12b08878'})  2025-09-03 00:40:56.357697 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:40:56.357708 | orchestrator | 2025-09-03 00:40:56.357719 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-09-03 00:40:56.357730 | orchestrator | Wednesday 03 September 2025 00:40:55 +0000 (0:00:00.161) 0:01:00.465 *** 2025-09-03 00:40:56.357741 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:40:56.357752 | orchestrator | 2025-09-03 00:40:56.357763 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-09-03 00:40:56.357774 | orchestrator | Wednesday 03 September 2025 00:40:56 +0000 (0:00:00.341) 0:01:00.807 *** 2025-09-03 00:40:56.357793 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e75c81d9-f6c1-538f-9534-cc9e3445127a', 'data_vg': 'ceph-e75c81d9-f6c1-538f-9534-cc9e3445127a'})  2025-09-03 00:41:02.361798 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-634e15af-8858-53e6-9f62-917e12b08878', 'data_vg': 'ceph-634e15af-8858-53e6-9f62-917e12b08878'})  2025-09-03 00:41:02.361915 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:41:02.361935 | orchestrator | 2025-09-03 00:41:02.361950 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-09-03 00:41:02.361963 | orchestrator | Wednesday 03 September 2025 00:40:56 +0000 (0:00:00.182) 0:01:00.989 *** 2025-09-03 00:41:02.361975 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e75c81d9-f6c1-538f-9534-cc9e3445127a', 'data_vg': 'ceph-e75c81d9-f6c1-538f-9534-cc9e3445127a'})  2025-09-03 00:41:02.361988 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-634e15af-8858-53e6-9f62-917e12b08878', 'data_vg': 'ceph-634e15af-8858-53e6-9f62-917e12b08878'})  2025-09-03 00:41:02.361999 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:41:02.362011 | orchestrator | 2025-09-03 00:41:02.362079 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-09-03 00:41:02.362091 | orchestrator | Wednesday 03 September 2025 00:40:56 +0000 (0:00:00.146) 0:01:01.136 *** 2025-09-03 00:41:02.362103 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e75c81d9-f6c1-538f-9534-cc9e3445127a', 'data_vg': 'ceph-e75c81d9-f6c1-538f-9534-cc9e3445127a'})  2025-09-03 00:41:02.362146 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-634e15af-8858-53e6-9f62-917e12b08878', 'data_vg': 'ceph-634e15af-8858-53e6-9f62-917e12b08878'})  2025-09-03 00:41:02.362157 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:41:02.362195 | orchestrator | 2025-09-03 00:41:02.362207 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-09-03 00:41:02.362218 | orchestrator | Wednesday 03 September 2025 00:40:56 +0000 (0:00:00.158) 0:01:01.294 *** 2025-09-03 00:41:02.362229 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:41:02.362240 | orchestrator | 2025-09-03 00:41:02.362251 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-09-03 00:41:02.362262 | orchestrator | Wednesday 03 September 2025 00:40:56 +0000 (0:00:00.148) 0:01:01.443 *** 2025-09-03 00:41:02.362273 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:41:02.362284 | orchestrator | 2025-09-03 00:41:02.362295 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-09-03 00:41:02.362306 | orchestrator | Wednesday 03 September 2025 00:40:56 +0000 (0:00:00.146) 0:01:01.589 *** 2025-09-03 00:41:02.362318 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:41:02.362331 | orchestrator | 2025-09-03 00:41:02.362345 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-09-03 00:41:02.362372 | orchestrator | Wednesday 03 September 2025 00:40:57 +0000 (0:00:00.147) 0:01:01.737 *** 2025-09-03 00:41:02.362386 | orchestrator | ok: [testbed-node-5] => { 2025-09-03 00:41:02.362400 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-09-03 00:41:02.362415 | orchestrator | } 2025-09-03 00:41:02.362429 | orchestrator | 2025-09-03 00:41:02.362442 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-09-03 00:41:02.362456 | orchestrator | Wednesday 03 September 2025 00:40:57 +0000 (0:00:00.141) 0:01:01.879 *** 2025-09-03 00:41:02.362468 | 
orchestrator | ok: [testbed-node-5] => { 2025-09-03 00:41:02.362482 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-09-03 00:41:02.362494 | orchestrator | } 2025-09-03 00:41:02.362507 | orchestrator | 2025-09-03 00:41:02.362520 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-09-03 00:41:02.362534 | orchestrator | Wednesday 03 September 2025 00:40:57 +0000 (0:00:00.144) 0:01:02.024 *** 2025-09-03 00:41:02.362546 | orchestrator | ok: [testbed-node-5] => { 2025-09-03 00:41:02.362559 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-09-03 00:41:02.362572 | orchestrator | } 2025-09-03 00:41:02.362585 | orchestrator | 2025-09-03 00:41:02.362598 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-09-03 00:41:02.362610 | orchestrator | Wednesday 03 September 2025 00:40:57 +0000 (0:00:00.148) 0:01:02.172 *** 2025-09-03 00:41:02.362624 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:41:02.362638 | orchestrator | 2025-09-03 00:41:02.362651 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-09-03 00:41:02.362665 | orchestrator | Wednesday 03 September 2025 00:40:58 +0000 (0:00:00.519) 0:01:02.692 *** 2025-09-03 00:41:02.362678 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:41:02.362688 | orchestrator | 2025-09-03 00:41:02.362699 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-09-03 00:41:02.362711 | orchestrator | Wednesday 03 September 2025 00:40:58 +0000 (0:00:00.511) 0:01:03.203 *** 2025-09-03 00:41:02.362722 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:41:02.362733 | orchestrator | 2025-09-03 00:41:02.362743 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-09-03 00:41:02.362754 | orchestrator | Wednesday 03 September 2025 00:40:59 +0000 (0:00:00.743) 0:01:03.947 *** 2025-09-03 00:41:02.362766 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:41:02.362777 | orchestrator | 2025-09-03 00:41:02.362788 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-09-03 00:41:02.362799 | orchestrator | Wednesday 03 September 2025 00:40:59 +0000 (0:00:00.144) 0:01:04.091 *** 2025-09-03 00:41:02.362810 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:41:02.362820 | orchestrator | 2025-09-03 00:41:02.362831 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-09-03 00:41:02.362843 | orchestrator | Wednesday 03 September 2025 00:40:59 +0000 (0:00:00.127) 0:01:04.219 *** 2025-09-03 00:41:02.362865 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:41:02.362876 | orchestrator | 2025-09-03 00:41:02.362887 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-09-03 00:41:02.362898 | orchestrator | Wednesday 03 September 2025 00:40:59 +0000 (0:00:00.132) 0:01:04.351 *** 2025-09-03 00:41:02.362910 | orchestrator | ok: [testbed-node-5] => { 2025-09-03 00:41:02.362921 | orchestrator |  "vgs_report": { 2025-09-03 00:41:02.362934 | orchestrator |  "vg": [] 2025-09-03 00:41:02.362963 | orchestrator |  } 2025-09-03 00:41:02.362976 | orchestrator | } 2025-09-03 00:41:02.362987 | orchestrator | 2025-09-03 00:41:02.362999 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-09-03 00:41:02.363010 | orchestrator | Wednesday 
03 September 2025 00:40:59 +0000 (0:00:00.141) 0:01:04.493 *** 2025-09-03 00:41:02.363021 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:41:02.363033 | orchestrator | 2025-09-03 00:41:02.363044 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-09-03 00:41:02.363055 | orchestrator | Wednesday 03 September 2025 00:40:59 +0000 (0:00:00.136) 0:01:04.630 *** 2025-09-03 00:41:02.363066 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:41:02.363078 | orchestrator | 2025-09-03 00:41:02.363089 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-09-03 00:41:02.363100 | orchestrator | Wednesday 03 September 2025 00:41:00 +0000 (0:00:00.136) 0:01:04.766 *** 2025-09-03 00:41:02.363128 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:41:02.363140 | orchestrator | 2025-09-03 00:41:02.363151 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-09-03 00:41:02.363162 | orchestrator | Wednesday 03 September 2025 00:41:00 +0000 (0:00:00.127) 0:01:04.894 *** 2025-09-03 00:41:02.363173 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:41:02.363184 | orchestrator | 2025-09-03 00:41:02.363196 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-09-03 00:41:02.363207 | orchestrator | Wednesday 03 September 2025 00:41:00 +0000 (0:00:00.131) 0:01:05.025 *** 2025-09-03 00:41:02.363218 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:41:02.363229 | orchestrator | 2025-09-03 00:41:02.363240 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-09-03 00:41:02.363251 | orchestrator | Wednesday 03 September 2025 00:41:00 +0000 (0:00:00.144) 0:01:05.170 *** 2025-09-03 00:41:02.363262 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:41:02.363273 | orchestrator | 2025-09-03 00:41:02.363284 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-09-03 00:41:02.363295 | orchestrator | Wednesday 03 September 2025 00:41:00 +0000 (0:00:00.137) 0:01:05.307 *** 2025-09-03 00:41:02.363305 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:41:02.363316 | orchestrator | 2025-09-03 00:41:02.363327 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-09-03 00:41:02.363338 | orchestrator | Wednesday 03 September 2025 00:41:00 +0000 (0:00:00.135) 0:01:05.443 *** 2025-09-03 00:41:02.363349 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:41:02.363360 | orchestrator | 2025-09-03 00:41:02.363371 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-09-03 00:41:02.363382 | orchestrator | Wednesday 03 September 2025 00:41:00 +0000 (0:00:00.128) 0:01:05.571 *** 2025-09-03 00:41:02.363393 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:41:02.363404 | orchestrator | 2025-09-03 00:41:02.363415 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-09-03 00:41:02.363433 | orchestrator | Wednesday 03 September 2025 00:41:01 +0000 (0:00:00.326) 0:01:05.897 *** 2025-09-03 00:41:02.363445 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:41:02.363456 | orchestrator | 2025-09-03 00:41:02.363467 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-09-03 00:41:02.363478 | 
orchestrator | Wednesday 03 September 2025 00:41:01 +0000 (0:00:00.142) 0:01:06.040 *** 2025-09-03 00:41:02.363489 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:41:02.363508 | orchestrator | 2025-09-03 00:41:02.363519 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-09-03 00:41:02.363530 | orchestrator | Wednesday 03 September 2025 00:41:01 +0000 (0:00:00.122) 0:01:06.163 *** 2025-09-03 00:41:02.363541 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:41:02.363552 | orchestrator | 2025-09-03 00:41:02.363563 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-09-03 00:41:02.363575 | orchestrator | Wednesday 03 September 2025 00:41:01 +0000 (0:00:00.134) 0:01:06.297 *** 2025-09-03 00:41:02.363586 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:41:02.363597 | orchestrator | 2025-09-03 00:41:02.363608 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-09-03 00:41:02.363619 | orchestrator | Wednesday 03 September 2025 00:41:01 +0000 (0:00:00.133) 0:01:06.430 *** 2025-09-03 00:41:02.363630 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:41:02.363641 | orchestrator | 2025-09-03 00:41:02.363653 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-09-03 00:41:02.363664 | orchestrator | Wednesday 03 September 2025 00:41:01 +0000 (0:00:00.138) 0:01:06.569 *** 2025-09-03 00:41:02.363675 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e75c81d9-f6c1-538f-9534-cc9e3445127a', 'data_vg': 'ceph-e75c81d9-f6c1-538f-9534-cc9e3445127a'})  2025-09-03 00:41:02.363687 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-634e15af-8858-53e6-9f62-917e12b08878', 'data_vg': 'ceph-634e15af-8858-53e6-9f62-917e12b08878'})  2025-09-03 00:41:02.363699 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:41:02.363710 | orchestrator | 2025-09-03 00:41:02.363721 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-09-03 00:41:02.363732 | orchestrator | Wednesday 03 September 2025 00:41:02 +0000 (0:00:00.150) 0:01:06.719 *** 2025-09-03 00:41:02.363743 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e75c81d9-f6c1-538f-9534-cc9e3445127a', 'data_vg': 'ceph-e75c81d9-f6c1-538f-9534-cc9e3445127a'})  2025-09-03 00:41:02.363755 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-634e15af-8858-53e6-9f62-917e12b08878', 'data_vg': 'ceph-634e15af-8858-53e6-9f62-917e12b08878'})  2025-09-03 00:41:02.363766 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:41:02.363777 | orchestrator | 2025-09-03 00:41:02.363788 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-09-03 00:41:02.363800 | orchestrator | Wednesday 03 September 2025 00:41:02 +0000 (0:00:00.139) 0:01:06.859 *** 2025-09-03 00:41:02.363818 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e75c81d9-f6c1-538f-9534-cc9e3445127a', 'data_vg': 'ceph-e75c81d9-f6c1-538f-9534-cc9e3445127a'})  2025-09-03 00:41:05.224627 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-634e15af-8858-53e6-9f62-917e12b08878', 'data_vg': 'ceph-634e15af-8858-53e6-9f62-917e12b08878'})  2025-09-03 00:41:05.224737 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:41:05.224754 | orchestrator | 2025-09-03 00:41:05.224767 
| orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-09-03 00:41:05.224780 | orchestrator | Wednesday 03 September 2025 00:41:02 +0000 (0:00:00.136) 0:01:06.996 *** 2025-09-03 00:41:05.224792 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e75c81d9-f6c1-538f-9534-cc9e3445127a', 'data_vg': 'ceph-e75c81d9-f6c1-538f-9534-cc9e3445127a'})  2025-09-03 00:41:05.224804 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-634e15af-8858-53e6-9f62-917e12b08878', 'data_vg': 'ceph-634e15af-8858-53e6-9f62-917e12b08878'})  2025-09-03 00:41:05.224815 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:41:05.224827 | orchestrator | 2025-09-03 00:41:05.224838 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-09-03 00:41:05.224849 | orchestrator | Wednesday 03 September 2025 00:41:02 +0000 (0:00:00.144) 0:01:07.140 *** 2025-09-03 00:41:05.224860 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e75c81d9-f6c1-538f-9534-cc9e3445127a', 'data_vg': 'ceph-e75c81d9-f6c1-538f-9534-cc9e3445127a'})  2025-09-03 00:41:05.224913 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-634e15af-8858-53e6-9f62-917e12b08878', 'data_vg': 'ceph-634e15af-8858-53e6-9f62-917e12b08878'})  2025-09-03 00:41:05.224925 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:41:05.224936 | orchestrator | 2025-09-03 00:41:05.224947 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-09-03 00:41:05.224958 | orchestrator | Wednesday 03 September 2025 00:41:02 +0000 (0:00:00.148) 0:01:07.289 *** 2025-09-03 00:41:05.224968 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e75c81d9-f6c1-538f-9534-cc9e3445127a', 'data_vg': 'ceph-e75c81d9-f6c1-538f-9534-cc9e3445127a'})  2025-09-03 00:41:05.224980 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-634e15af-8858-53e6-9f62-917e12b08878', 'data_vg': 'ceph-634e15af-8858-53e6-9f62-917e12b08878'})  2025-09-03 00:41:05.224990 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:41:05.225001 | orchestrator | 2025-09-03 00:41:05.225012 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-09-03 00:41:05.225023 | orchestrator | Wednesday 03 September 2025 00:41:02 +0000 (0:00:00.143) 0:01:07.432 *** 2025-09-03 00:41:05.225034 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e75c81d9-f6c1-538f-9534-cc9e3445127a', 'data_vg': 'ceph-e75c81d9-f6c1-538f-9534-cc9e3445127a'})  2025-09-03 00:41:05.225045 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-634e15af-8858-53e6-9f62-917e12b08878', 'data_vg': 'ceph-634e15af-8858-53e6-9f62-917e12b08878'})  2025-09-03 00:41:05.225057 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:41:05.225067 | orchestrator | 2025-09-03 00:41:05.225078 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-09-03 00:41:05.225090 | orchestrator | Wednesday 03 September 2025 00:41:03 +0000 (0:00:00.340) 0:01:07.773 *** 2025-09-03 00:41:05.225101 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e75c81d9-f6c1-538f-9534-cc9e3445127a', 'data_vg': 'ceph-e75c81d9-f6c1-538f-9534-cc9e3445127a'})  2025-09-03 00:41:05.225132 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-634e15af-8858-53e6-9f62-917e12b08878', 
'data_vg': 'ceph-634e15af-8858-53e6-9f62-917e12b08878'})  2025-09-03 00:41:05.225144 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:41:05.225154 | orchestrator | 2025-09-03 00:41:05.225166 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-09-03 00:41:05.225177 | orchestrator | Wednesday 03 September 2025 00:41:03 +0000 (0:00:00.149) 0:01:07.922 *** 2025-09-03 00:41:05.225188 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:41:05.225200 | orchestrator | 2025-09-03 00:41:05.225211 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-09-03 00:41:05.225222 | orchestrator | Wednesday 03 September 2025 00:41:03 +0000 (0:00:00.504) 0:01:08.427 *** 2025-09-03 00:41:05.225233 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:41:05.225243 | orchestrator | 2025-09-03 00:41:05.225254 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-09-03 00:41:05.225265 | orchestrator | Wednesday 03 September 2025 00:41:04 +0000 (0:00:00.520) 0:01:08.947 *** 2025-09-03 00:41:05.225276 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:41:05.225287 | orchestrator | 2025-09-03 00:41:05.225298 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-09-03 00:41:05.225309 | orchestrator | Wednesday 03 September 2025 00:41:04 +0000 (0:00:00.139) 0:01:09.086 *** 2025-09-03 00:41:05.225320 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-634e15af-8858-53e6-9f62-917e12b08878', 'vg_name': 'ceph-634e15af-8858-53e6-9f62-917e12b08878'}) 2025-09-03 00:41:05.225332 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-e75c81d9-f6c1-538f-9534-cc9e3445127a', 'vg_name': 'ceph-e75c81d9-f6c1-538f-9534-cc9e3445127a'}) 2025-09-03 00:41:05.225343 | orchestrator | 2025-09-03 00:41:05.225354 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-09-03 00:41:05.225373 | orchestrator | Wednesday 03 September 2025 00:41:04 +0000 (0:00:00.164) 0:01:09.251 *** 2025-09-03 00:41:05.225401 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e75c81d9-f6c1-538f-9534-cc9e3445127a', 'data_vg': 'ceph-e75c81d9-f6c1-538f-9534-cc9e3445127a'})  2025-09-03 00:41:05.225413 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-634e15af-8858-53e6-9f62-917e12b08878', 'data_vg': 'ceph-634e15af-8858-53e6-9f62-917e12b08878'})  2025-09-03 00:41:05.225425 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:41:05.225436 | orchestrator | 2025-09-03 00:41:05.225447 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-09-03 00:41:05.225458 | orchestrator | Wednesday 03 September 2025 00:41:04 +0000 (0:00:00.147) 0:01:09.398 *** 2025-09-03 00:41:05.225469 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e75c81d9-f6c1-538f-9534-cc9e3445127a', 'data_vg': 'ceph-e75c81d9-f6c1-538f-9534-cc9e3445127a'})  2025-09-03 00:41:05.225480 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-634e15af-8858-53e6-9f62-917e12b08878', 'data_vg': 'ceph-634e15af-8858-53e6-9f62-917e12b08878'})  2025-09-03 00:41:05.225492 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:41:05.225503 | orchestrator | 2025-09-03 00:41:05.225514 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-09-03 00:41:05.225525 | 
orchestrator | Wednesday 03 September 2025 00:41:04 +0000 (0:00:00.145) 0:01:09.544 *** 2025-09-03 00:41:05.225536 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e75c81d9-f6c1-538f-9534-cc9e3445127a', 'data_vg': 'ceph-e75c81d9-f6c1-538f-9534-cc9e3445127a'})  2025-09-03 00:41:05.225569 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-634e15af-8858-53e6-9f62-917e12b08878', 'data_vg': 'ceph-634e15af-8858-53e6-9f62-917e12b08878'})  2025-09-03 00:41:05.225580 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:41:05.225592 | orchestrator | 2025-09-03 00:41:05.225603 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-09-03 00:41:05.225614 | orchestrator | Wednesday 03 September 2025 00:41:05 +0000 (0:00:00.152) 0:01:09.696 *** 2025-09-03 00:41:05.225625 | orchestrator | ok: [testbed-node-5] => { 2025-09-03 00:41:05.225636 | orchestrator |  "lvm_report": { 2025-09-03 00:41:05.225647 | orchestrator |  "lv": [ 2025-09-03 00:41:05.225659 | orchestrator |  { 2025-09-03 00:41:05.225670 | orchestrator |  "lv_name": "osd-block-634e15af-8858-53e6-9f62-917e12b08878", 2025-09-03 00:41:05.225687 | orchestrator |  "vg_name": "ceph-634e15af-8858-53e6-9f62-917e12b08878" 2025-09-03 00:41:05.225698 | orchestrator |  }, 2025-09-03 00:41:05.225710 | orchestrator |  { 2025-09-03 00:41:05.225721 | orchestrator |  "lv_name": "osd-block-e75c81d9-f6c1-538f-9534-cc9e3445127a", 2025-09-03 00:41:05.225732 | orchestrator |  "vg_name": "ceph-e75c81d9-f6c1-538f-9534-cc9e3445127a" 2025-09-03 00:41:05.225743 | orchestrator |  } 2025-09-03 00:41:05.225754 | orchestrator |  ], 2025-09-03 00:41:05.225765 | orchestrator |  "pv": [ 2025-09-03 00:41:05.225776 | orchestrator |  { 2025-09-03 00:41:05.225788 | orchestrator |  "pv_name": "/dev/sdb", 2025-09-03 00:41:05.225799 | orchestrator |  "vg_name": "ceph-e75c81d9-f6c1-538f-9534-cc9e3445127a" 2025-09-03 00:41:05.225810 | orchestrator |  }, 2025-09-03 00:41:05.225820 | orchestrator |  { 2025-09-03 00:41:05.225832 | orchestrator |  "pv_name": "/dev/sdc", 2025-09-03 00:41:05.225843 | orchestrator |  "vg_name": "ceph-634e15af-8858-53e6-9f62-917e12b08878" 2025-09-03 00:41:05.225854 | orchestrator |  } 2025-09-03 00:41:05.225865 | orchestrator |  ] 2025-09-03 00:41:05.225876 | orchestrator |  } 2025-09-03 00:41:05.225887 | orchestrator | } 2025-09-03 00:41:05.225899 | orchestrator | 2025-09-03 00:41:05.225910 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-03 00:41:05.225929 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-09-03 00:41:05.225940 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-09-03 00:41:05.225951 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-09-03 00:41:05.225963 | orchestrator | 2025-09-03 00:41:05.225974 | orchestrator | 2025-09-03 00:41:05.225985 | orchestrator | 2025-09-03 00:41:05.225996 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-03 00:41:05.226007 | orchestrator | Wednesday 03 September 2025 00:41:05 +0000 (0:00:00.135) 0:01:09.832 *** 2025-09-03 00:41:05.226074 | orchestrator | =============================================================================== 2025-09-03 00:41:05.226086 | orchestrator | Create block VGs 
-------------------------------------------------------- 5.72s 2025-09-03 00:41:05.226097 | orchestrator | Create block LVs -------------------------------------------------------- 4.09s 2025-09-03 00:41:05.226129 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.77s 2025-09-03 00:41:05.226140 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.76s 2025-09-03 00:41:05.226151 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.60s 2025-09-03 00:41:05.226162 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.57s 2025-09-03 00:41:05.226207 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.54s 2025-09-03 00:41:05.226218 | orchestrator | Add known partitions to the list of available block devices ------------- 1.42s 2025-09-03 00:41:05.226237 | orchestrator | Add known links to the list of available block devices ------------------ 1.17s 2025-09-03 00:41:05.605492 | orchestrator | Add known partitions to the list of available block devices ------------- 1.11s 2025-09-03 00:41:05.605588 | orchestrator | Print LVM report data --------------------------------------------------- 0.86s 2025-09-03 00:41:05.605601 | orchestrator | Add known partitions to the list of available block devices ------------- 0.83s 2025-09-03 00:41:05.605612 | orchestrator | Add known links to the list of available block devices ------------------ 0.81s 2025-09-03 00:41:05.605623 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.77s 2025-09-03 00:41:05.605634 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.73s 2025-09-03 00:41:05.605646 | orchestrator | Add known partitions to the list of available block devices ------------- 0.71s 2025-09-03 00:41:05.605657 | orchestrator | Print number of OSDs wanted per DB VG ----------------------------------- 0.69s 2025-09-03 00:41:05.605668 | orchestrator | Get initial list of available block devices ----------------------------- 0.68s 2025-09-03 00:41:05.605678 | orchestrator | Create DB LVs for ceph_db_devices --------------------------------------- 0.66s 2025-09-03 00:41:05.605689 | orchestrator | Print 'Create WAL LVs for ceph_wal_devices' ----------------------------- 0.65s 2025-09-03 00:41:17.821536 | orchestrator | 2025-09-03 00:41:17 | INFO  | Task e61c0ac5-21d8-45d8-9dee-ffee37d995f9 (facts) was prepared for execution. 2025-09-03 00:41:17.821659 | orchestrator | 2025-09-03 00:41:17 | INFO  | It takes a moment until task e61c0ac5-21d8-45d8-9dee-ffee37d995f9 (facts) has been started and output is visible here. 
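[Editor's note] The play logged above validates the pre-created Ceph block VGs/LVs against the data/data_vg pairs from lvm_volumes (DB/WAL creation is skipped because no dedicated DB/WAL devices are defined) and finishes by printing an LV/PV report. Below is a minimal, illustrative sketch of how such a report can be gathered with LVM's JSON reporting and merged into the lv/pv structure shown in the log; the task and variable names are assumptions for illustration, not the actual role's implementation.

```yaml
# Sketch only: collect LV/PV/VG data as JSON and merge it into one report
# shaped like the lvm_report printed above. Requires root (become: true).
- name: Get list of Ceph LVs with associated VGs (sketch)
  ansible.builtin.command: lvs --reportformat json -o lv_name,vg_name
  register: _lvs_cmd_output
  changed_when: false
  become: true

- name: Get list of Ceph PVs with associated VGs (sketch)
  ansible.builtin.command: pvs --reportformat json -o pv_name,vg_name
  register: _pvs_cmd_output
  changed_when: false
  become: true

- name: Gather VGs with total and available size in bytes (sketch)
  ansible.builtin.command: vgs --reportformat json --units b -o vg_name,vg_size,vg_free
  register: _vgs_cmd_output
  changed_when: false
  become: true

- name: Combine LV and PV JSON into one report (sketch)
  ansible.builtin.set_fact:
    lvm_report: >-
      {{ (_lvs_cmd_output.stdout | from_json).report[0]
         | combine((_pvs_cmd_output.stdout | from_json).report[0]) }}
```

The merged fact then contains the same "lv" and "pv" lists that appear in the "Print LVM report data" task output above.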
2025-09-03 00:41:28.735450 | orchestrator | 2025-09-03 00:41:28.735580 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-09-03 00:41:28.735597 | orchestrator | 2025-09-03 00:41:28.735609 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-09-03 00:41:28.735621 | orchestrator | Wednesday 03 September 2025 00:41:21 +0000 (0:00:00.200) 0:00:00.200 *** 2025-09-03 00:41:28.735633 | orchestrator | ok: [testbed-manager] 2025-09-03 00:41:28.735647 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:41:28.735685 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:41:28.735697 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:41:28.735708 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:41:28.735719 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:41:28.735731 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:41:28.735742 | orchestrator | 2025-09-03 00:41:28.735753 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-09-03 00:41:28.735764 | orchestrator | Wednesday 03 September 2025 00:41:22 +0000 (0:00:00.858) 0:00:01.059 *** 2025-09-03 00:41:28.735792 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:41:28.735805 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:41:28.735816 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:41:28.735827 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:41:28.735839 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:41:28.735850 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:41:28.735861 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:41:28.735872 | orchestrator | 2025-09-03 00:41:28.735883 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-03 00:41:28.735894 | orchestrator | 2025-09-03 00:41:28.735905 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-03 00:41:28.735916 | orchestrator | Wednesday 03 September 2025 00:41:23 +0000 (0:00:01.056) 0:00:02.115 *** 2025-09-03 00:41:28.735927 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:41:28.735938 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:41:28.735949 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:41:28.735960 | orchestrator | ok: [testbed-manager] 2025-09-03 00:41:28.735971 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:41:28.735985 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:41:28.735997 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:41:28.736010 | orchestrator | 2025-09-03 00:41:28.736023 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-09-03 00:41:28.736035 | orchestrator | 2025-09-03 00:41:28.736048 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-09-03 00:41:28.736061 | orchestrator | Wednesday 03 September 2025 00:41:28 +0000 (0:00:04.791) 0:00:06.907 *** 2025-09-03 00:41:28.736074 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:41:28.736087 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:41:28.736138 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:41:28.736151 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:41:28.736163 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:41:28.736176 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:41:28.736189 | orchestrator | skipping: 
[testbed-node-5] 2025-09-03 00:41:28.736201 | orchestrator | 2025-09-03 00:41:28.736213 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-03 00:41:28.736227 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-03 00:41:28.736240 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-03 00:41:28.736253 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-03 00:41:28.736266 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-03 00:41:28.736278 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-03 00:41:28.736290 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-03 00:41:28.736303 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-03 00:41:28.736329 | orchestrator | 2025-09-03 00:41:28.736340 | orchestrator | 2025-09-03 00:41:28.736351 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-03 00:41:28.736362 | orchestrator | Wednesday 03 September 2025 00:41:28 +0000 (0:00:00.422) 0:00:07.329 *** 2025-09-03 00:41:28.736372 | orchestrator | =============================================================================== 2025-09-03 00:41:28.736383 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.79s 2025-09-03 00:41:28.736394 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.06s 2025-09-03 00:41:28.736404 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 0.86s 2025-09-03 00:41:28.736415 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.42s 2025-09-03 00:41:40.940533 | orchestrator | 2025-09-03 00:41:40 | INFO  | Task 1071c82b-c840-4542-8084-1f7b263eca25 (frr) was prepared for execution. 2025-09-03 00:41:40.940680 | orchestrator | 2025-09-03 00:41:40 | INFO  | It takes a moment until task 1071c82b-c840-4542-8084-1f7b263eca25 (frr) has been started and output is visible here. 
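[Editor's note] The facts play logged above only creates a custom facts directory on every host and then gathers facts (the "Copy fact files" and --limit variants are skipped). A minimal sketch of those two steps, assuming the conventional /etc/ansible/facts.d location for local facts; the osism.commons.facts role's actual path and options may differ.

```yaml
- name: Create custom facts directory (sketch; path is an assumption)
  ansible.builtin.file:
    path: /etc/ansible/facts.d   # standard Ansible local-facts directory, assumed here
    state: directory
    mode: "0755"
  become: true

- name: Gather facts about hosts (sketch)
  ansible.builtin.setup:
```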
2025-09-03 00:42:06.435172 | orchestrator | 2025-09-03 00:42:06.435299 | orchestrator | PLAY [Apply role frr] ********************************************************** 2025-09-03 00:42:06.435311 | orchestrator | 2025-09-03 00:42:06.435319 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2025-09-03 00:42:06.435328 | orchestrator | Wednesday 03 September 2025 00:41:44 +0000 (0:00:00.259) 0:00:00.259 *** 2025-09-03 00:42:06.435335 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2025-09-03 00:42:06.435344 | orchestrator | 2025-09-03 00:42:06.435351 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2025-09-03 00:42:06.435358 | orchestrator | Wednesday 03 September 2025 00:41:45 +0000 (0:00:00.226) 0:00:00.486 *** 2025-09-03 00:42:06.435365 | orchestrator | changed: [testbed-manager] 2025-09-03 00:42:06.435375 | orchestrator | 2025-09-03 00:42:06.435382 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2025-09-03 00:42:06.435389 | orchestrator | Wednesday 03 September 2025 00:41:46 +0000 (0:00:01.147) 0:00:01.633 *** 2025-09-03 00:42:06.435396 | orchestrator | changed: [testbed-manager] 2025-09-03 00:42:06.435402 | orchestrator | 2025-09-03 00:42:06.435423 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2025-09-03 00:42:06.435431 | orchestrator | Wednesday 03 September 2025 00:41:55 +0000 (0:00:09.242) 0:00:10.876 *** 2025-09-03 00:42:06.435438 | orchestrator | ok: [testbed-manager] 2025-09-03 00:42:06.435445 | orchestrator | 2025-09-03 00:42:06.435452 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2025-09-03 00:42:06.435459 | orchestrator | Wednesday 03 September 2025 00:41:56 +0000 (0:00:01.194) 0:00:12.071 *** 2025-09-03 00:42:06.435466 | orchestrator | changed: [testbed-manager] 2025-09-03 00:42:06.435472 | orchestrator | 2025-09-03 00:42:06.435479 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2025-09-03 00:42:06.435486 | orchestrator | Wednesday 03 September 2025 00:41:57 +0000 (0:00:00.893) 0:00:12.965 *** 2025-09-03 00:42:06.435493 | orchestrator | ok: [testbed-manager] 2025-09-03 00:42:06.435500 | orchestrator | 2025-09-03 00:42:06.435507 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2025-09-03 00:42:06.435514 | orchestrator | Wednesday 03 September 2025 00:41:58 +0000 (0:00:01.114) 0:00:14.079 *** 2025-09-03 00:42:06.435521 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-03 00:42:06.435528 | orchestrator | 2025-09-03 00:42:06.435535 | orchestrator | TASK [osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf] *** 2025-09-03 00:42:06.435542 | orchestrator | Wednesday 03 September 2025 00:41:59 +0000 (0:00:00.796) 0:00:14.876 *** 2025-09-03 00:42:06.435549 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:42:06.435556 | orchestrator | 2025-09-03 00:42:06.435563 | orchestrator | TASK [osism.services.frr : Copy file from the role: /etc/frr/frr.conf] ********* 2025-09-03 00:42:06.435590 | orchestrator | Wednesday 03 September 2025 00:41:59 +0000 (0:00:00.158) 0:00:15.034 *** 2025-09-03 00:42:06.435598 | orchestrator | changed: [testbed-manager] 2025-09-03 00:42:06.435605 | orchestrator 
| 2025-09-03 00:42:06.435612 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2025-09-03 00:42:06.435619 | orchestrator | Wednesday 03 September 2025 00:42:01 +0000 (0:00:01.892) 0:00:16.926 *** 2025-09-03 00:42:06.435625 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2025-09-03 00:42:06.435632 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0}) 2025-09-03 00:42:06.435640 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2025-09-03 00:42:06.435647 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2025-09-03 00:42:06.435654 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2025-09-03 00:42:06.435661 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2025-09-03 00:42:06.435668 | orchestrator | 2025-09-03 00:42:06.435675 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2025-09-03 00:42:06.435681 | orchestrator | Wednesday 03 September 2025 00:42:03 +0000 (0:00:02.095) 0:00:19.022 *** 2025-09-03 00:42:06.435688 | orchestrator | ok: [testbed-manager] 2025-09-03 00:42:06.435695 | orchestrator | 2025-09-03 00:42:06.435702 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2025-09-03 00:42:06.435708 | orchestrator | Wednesday 03 September 2025 00:42:04 +0000 (0:00:01.268) 0:00:20.290 *** 2025-09-03 00:42:06.435715 | orchestrator | changed: [testbed-manager] 2025-09-03 00:42:06.435722 | orchestrator | 2025-09-03 00:42:06.435729 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-03 00:42:06.435736 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-03 00:42:06.435743 | orchestrator | 2025-09-03 00:42:06.435750 | orchestrator | 2025-09-03 00:42:06.435756 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-03 00:42:06.435763 | orchestrator | Wednesday 03 September 2025 00:42:06 +0000 (0:00:01.280) 0:00:21.571 *** 2025-09-03 00:42:06.435770 | orchestrator | =============================================================================== 2025-09-03 00:42:06.435777 | orchestrator | osism.services.frr : Install frr package -------------------------------- 9.24s 2025-09-03 00:42:06.435783 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.10s 2025-09-03 00:42:06.435790 | orchestrator | osism.services.frr : Copy file from the role: /etc/frr/frr.conf --------- 1.89s 2025-09-03 00:42:06.435797 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.28s 2025-09-03 00:42:06.435817 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.27s 2025-09-03 00:42:06.435825 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.19s 2025-09-03 00:42:06.435832 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.15s 2025-09-03 00:42:06.435838 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.11s 2025-09-03 
00:42:06.435845 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.89s 2025-09-03 00:42:06.435852 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.80s 2025-09-03 00:42:06.435859 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.23s 2025-09-03 00:42:06.435865 | orchestrator | osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf --- 0.16s 2025-09-03 00:42:06.700913 | orchestrator | 2025-09-03 00:42:06.703808 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Wed Sep 3 00:42:06 UTC 2025 2025-09-03 00:42:06.703866 | orchestrator | 2025-09-03 00:42:08.502881 | orchestrator | 2025-09-03 00:42:08 | INFO  | Collection nutshell is prepared for execution 2025-09-03 00:42:08.502986 | orchestrator | 2025-09-03 00:42:08 | INFO  | D [0] - dotfiles 2025-09-03 00:42:18.588714 | orchestrator | 2025-09-03 00:42:18 | INFO  | D [0] - homer 2025-09-03 00:42:18.588821 | orchestrator | 2025-09-03 00:42:18 | INFO  | D [0] - netdata 2025-09-03 00:42:18.588833 | orchestrator | 2025-09-03 00:42:18 | INFO  | D [0] - openstackclient 2025-09-03 00:42:18.588841 | orchestrator | 2025-09-03 00:42:18 | INFO  | D [0] - phpmyadmin 2025-09-03 00:42:18.589013 | orchestrator | 2025-09-03 00:42:18 | INFO  | A [0] - common 2025-09-03 00:42:18.592974 | orchestrator | 2025-09-03 00:42:18 | INFO  | A [1] -- loadbalancer 2025-09-03 00:42:18.593013 | orchestrator | 2025-09-03 00:42:18 | INFO  | D [2] --- opensearch 2025-09-03 00:42:18.593528 | orchestrator | 2025-09-03 00:42:18 | INFO  | A [2] --- mariadb-ng 2025-09-03 00:42:18.593702 | orchestrator | 2025-09-03 00:42:18 | INFO  | D [3] ---- horizon 2025-09-03 00:42:18.594189 | orchestrator | 2025-09-03 00:42:18 | INFO  | A [3] ---- keystone 2025-09-03 00:42:18.594308 | orchestrator | 2025-09-03 00:42:18 | INFO  | A [4] ----- neutron 2025-09-03 00:42:18.594896 | orchestrator | 2025-09-03 00:42:18 | INFO  | D [5] ------ wait-for-nova 2025-09-03 00:42:18.594919 | orchestrator | 2025-09-03 00:42:18 | INFO  | A [5] ------ octavia 2025-09-03 00:42:18.596325 | orchestrator | 2025-09-03 00:42:18 | INFO  | D [4] ----- barbican 2025-09-03 00:42:18.596348 | orchestrator | 2025-09-03 00:42:18 | INFO  | D [4] ----- designate 2025-09-03 00:42:18.596595 | orchestrator | 2025-09-03 00:42:18 | INFO  | D [4] ----- ironic 2025-09-03 00:42:18.596787 | orchestrator | 2025-09-03 00:42:18 | INFO  | D [4] ----- placement 2025-09-03 00:42:18.596988 | orchestrator | 2025-09-03 00:42:18 | INFO  | D [4] ----- magnum 2025-09-03 00:42:18.597993 | orchestrator | 2025-09-03 00:42:18 | INFO  | A [1] -- openvswitch 2025-09-03 00:42:18.598146 | orchestrator | 2025-09-03 00:42:18 | INFO  | D [2] --- ovn 2025-09-03 00:42:18.598582 | orchestrator | 2025-09-03 00:42:18 | INFO  | D [1] -- memcached 2025-09-03 00:42:18.598830 | orchestrator | 2025-09-03 00:42:18 | INFO  | D [1] -- redis 2025-09-03 00:42:18.598851 | orchestrator | 2025-09-03 00:42:18 | INFO  | D [1] -- rabbitmq-ng 2025-09-03 00:42:18.599368 | orchestrator | 2025-09-03 00:42:18 | INFO  | A [0] - kubernetes 2025-09-03 00:42:18.601711 | orchestrator | 2025-09-03 00:42:18 | INFO  | D [1] -- kubeconfig 2025-09-03 00:42:18.601731 | orchestrator | 2025-09-03 00:42:18 | INFO  | A [1] -- copy-kubeconfig 2025-09-03 00:42:18.602059 | orchestrator | 2025-09-03 00:42:18 | INFO  | A [0] - ceph 2025-09-03 00:42:18.604252 | orchestrator | 2025-09-03 00:42:18 | INFO  | A [1] -- ceph-pools 2025-09-03 
00:42:18.604274 | orchestrator | 2025-09-03 00:42:18 | INFO  | A [2] --- copy-ceph-keys 2025-09-03 00:42:18.604447 | orchestrator | 2025-09-03 00:42:18 | INFO  | A [3] ---- cephclient 2025-09-03 00:42:18.604466 | orchestrator | 2025-09-03 00:42:18 | INFO  | D [4] ----- ceph-bootstrap-dashboard 2025-09-03 00:42:18.604830 | orchestrator | 2025-09-03 00:42:18 | INFO  | A [4] ----- wait-for-keystone 2025-09-03 00:42:18.604849 | orchestrator | 2025-09-03 00:42:18 | INFO  | D [5] ------ kolla-ceph-rgw 2025-09-03 00:42:18.605376 | orchestrator | 2025-09-03 00:42:18 | INFO  | D [5] ------ glance 2025-09-03 00:42:18.605396 | orchestrator | 2025-09-03 00:42:18 | INFO  | D [5] ------ cinder 2025-09-03 00:42:18.605408 | orchestrator | 2025-09-03 00:42:18 | INFO  | D [5] ------ nova 2025-09-03 00:42:18.605774 | orchestrator | 2025-09-03 00:42:18 | INFO  | A [4] ----- prometheus 2025-09-03 00:42:18.605794 | orchestrator | 2025-09-03 00:42:18 | INFO  | D [5] ------ grafana 2025-09-03 00:42:18.808415 | orchestrator | 2025-09-03 00:42:18 | INFO  | All tasks of the collection nutshell are prepared for execution 2025-09-03 00:42:18.808488 | orchestrator | 2025-09-03 00:42:18 | INFO  | Tasks are running in the background 2025-09-03 00:42:21.966164 | orchestrator | 2025-09-03 00:42:21 | INFO  | No task IDs specified, wait for all currently running tasks 2025-09-03 00:42:24.085787 | orchestrator | 2025-09-03 00:42:24 | INFO  | Task f2f4face-566f-47a6-816a-48a8d05984c3 is in state STARTED 2025-09-03 00:42:24.085901 | orchestrator | 2025-09-03 00:42:24 | INFO  | Task ce9c6c65-bb8a-4f40-8472-279e4d7da719 is in state STARTED 2025-09-03 00:42:24.086325 | orchestrator | 2025-09-03 00:42:24 | INFO  | Task bb7976c7-8a7e-42ea-bc83-03640aa5fdcc is in state STARTED 2025-09-03 00:42:24.087434 | orchestrator | 2025-09-03 00:42:24 | INFO  | Task a5decbfc-e914-4d76-b917-96903faa9b48 is in state STARTED 2025-09-03 00:42:24.090522 | orchestrator | 2025-09-03 00:42:24 | INFO  | Task 7d924a8e-1e34-4611-a309-eb9322576bed is in state STARTED 2025-09-03 00:42:24.090896 | orchestrator | 2025-09-03 00:42:24 | INFO  | Task 48eda5c3-a457-44ae-b9cc-8417d9341f35 is in state STARTED 2025-09-03 00:42:24.091581 | orchestrator | 2025-09-03 00:42:24 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:42:24.091611 | orchestrator | 2025-09-03 00:42:24 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:42:27.191900 | orchestrator | 2025-09-03 00:42:27 | INFO  | Task f2f4face-566f-47a6-816a-48a8d05984c3 is in state STARTED 2025-09-03 00:42:27.192008 | orchestrator | 2025-09-03 00:42:27 | INFO  | Task ce9c6c65-bb8a-4f40-8472-279e4d7da719 is in state STARTED 2025-09-03 00:42:27.192024 | orchestrator | 2025-09-03 00:42:27 | INFO  | Task bb7976c7-8a7e-42ea-bc83-03640aa5fdcc is in state STARTED 2025-09-03 00:42:27.192384 | orchestrator | 2025-09-03 00:42:27 | INFO  | Task a5decbfc-e914-4d76-b917-96903faa9b48 is in state STARTED 2025-09-03 00:42:27.192828 | orchestrator | 2025-09-03 00:42:27 | INFO  | Task 7d924a8e-1e34-4611-a309-eb9322576bed is in state STARTED 2025-09-03 00:42:27.195426 | orchestrator | 2025-09-03 00:42:27 | INFO  | Task 48eda5c3-a457-44ae-b9cc-8417d9341f35 is in state STARTED 2025-09-03 00:42:27.195994 | orchestrator | 2025-09-03 00:42:27 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:42:27.196016 | orchestrator | 2025-09-03 00:42:27 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:42:30.336773 | orchestrator | 2025-09-03 00:42:30 | INFO  
| Task f2f4face-566f-47a6-816a-48a8d05984c3 is in state STARTED 2025-09-03 00:42:30.336867 | orchestrator | 2025-09-03 00:42:30 | INFO  | Task ce9c6c65-bb8a-4f40-8472-279e4d7da719 is in state STARTED 2025-09-03 00:42:30.336877 | orchestrator | 2025-09-03 00:42:30 | INFO  | Task bb7976c7-8a7e-42ea-bc83-03640aa5fdcc is in state STARTED 2025-09-03 00:42:30.336884 | orchestrator | 2025-09-03 00:42:30 | INFO  | Task a5decbfc-e914-4d76-b917-96903faa9b48 is in state STARTED 2025-09-03 00:42:30.336890 | orchestrator | 2025-09-03 00:42:30 | INFO  | Task 7d924a8e-1e34-4611-a309-eb9322576bed is in state STARTED 2025-09-03 00:42:30.336897 | orchestrator | 2025-09-03 00:42:30 | INFO  | Task 48eda5c3-a457-44ae-b9cc-8417d9341f35 is in state STARTED 2025-09-03 00:42:30.336903 | orchestrator | 2025-09-03 00:42:30 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:42:30.336910 | orchestrator | 2025-09-03 00:42:30 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:42:33.272685 | orchestrator | 2025-09-03 00:42:33 | INFO  | Task f2f4face-566f-47a6-816a-48a8d05984c3 is in state STARTED 2025-09-03 00:42:33.272814 | orchestrator | 2025-09-03 00:42:33 | INFO  | Task ce9c6c65-bb8a-4f40-8472-279e4d7da719 is in state STARTED 2025-09-03 00:42:33.272831 | orchestrator | 2025-09-03 00:42:33 | INFO  | Task bb7976c7-8a7e-42ea-bc83-03640aa5fdcc is in state STARTED 2025-09-03 00:42:33.277305 | orchestrator | 2025-09-03 00:42:33 | INFO  | Task a5decbfc-e914-4d76-b917-96903faa9b48 is in state STARTED 2025-09-03 00:42:33.277331 | orchestrator | 2025-09-03 00:42:33 | INFO  | Task 7d924a8e-1e34-4611-a309-eb9322576bed is in state STARTED 2025-09-03 00:42:33.283495 | orchestrator | 2025-09-03 00:42:33 | INFO  | Task 48eda5c3-a457-44ae-b9cc-8417d9341f35 is in state STARTED 2025-09-03 00:42:33.283571 | orchestrator | 2025-09-03 00:42:33 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:42:33.283586 | orchestrator | 2025-09-03 00:42:33 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:42:36.426455 | orchestrator | 2025-09-03 00:42:36 | INFO  | Task f2f4face-566f-47a6-816a-48a8d05984c3 is in state STARTED 2025-09-03 00:42:36.427041 | orchestrator | 2025-09-03 00:42:36 | INFO  | Task ce9c6c65-bb8a-4f40-8472-279e4d7da719 is in state STARTED 2025-09-03 00:42:36.428274 | orchestrator | 2025-09-03 00:42:36 | INFO  | Task bb7976c7-8a7e-42ea-bc83-03640aa5fdcc is in state STARTED 2025-09-03 00:42:36.435584 | orchestrator | 2025-09-03 00:42:36 | INFO  | Task a5decbfc-e914-4d76-b917-96903faa9b48 is in state STARTED 2025-09-03 00:42:36.440037 | orchestrator | 2025-09-03 00:42:36 | INFO  | Task 7d924a8e-1e34-4611-a309-eb9322576bed is in state STARTED 2025-09-03 00:42:36.441324 | orchestrator | 2025-09-03 00:42:36 | INFO  | Task 48eda5c3-a457-44ae-b9cc-8417d9341f35 is in state STARTED 2025-09-03 00:42:36.442348 | orchestrator | 2025-09-03 00:42:36 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:42:36.442372 | orchestrator | 2025-09-03 00:42:36 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:42:39.535245 | orchestrator | 2025-09-03 00:42:39 | INFO  | Task f2f4face-566f-47a6-816a-48a8d05984c3 is in state STARTED 2025-09-03 00:42:39.535369 | orchestrator | 2025-09-03 00:42:39 | INFO  | Task ce9c6c65-bb8a-4f40-8472-279e4d7da719 is in state STARTED 2025-09-03 00:42:39.535385 | orchestrator | 2025-09-03 00:42:39 | INFO  | Task bb7976c7-8a7e-42ea-bc83-03640aa5fdcc is in state STARTED 2025-09-03 00:42:39.535398 
| orchestrator | 2025-09-03 00:42:39 | INFO  | Task a5decbfc-e914-4d76-b917-96903faa9b48 is in state STARTED 2025-09-03 00:42:39.535409 | orchestrator | 2025-09-03 00:42:39 | INFO  | Task 7d924a8e-1e34-4611-a309-eb9322576bed is in state STARTED 2025-09-03 00:42:39.535420 | orchestrator | 2025-09-03 00:42:39 | INFO  | Task 48eda5c3-a457-44ae-b9cc-8417d9341f35 is in state STARTED 2025-09-03 00:42:39.535432 | orchestrator | 2025-09-03 00:42:39 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:42:39.535443 | orchestrator | 2025-09-03 00:42:39 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:42:42.586822 | orchestrator | 2025-09-03 00:42:42 | INFO  | Task f2f4face-566f-47a6-816a-48a8d05984c3 is in state STARTED 2025-09-03 00:42:42.586988 | orchestrator | 2025-09-03 00:42:42 | INFO  | Task ce9c6c65-bb8a-4f40-8472-279e4d7da719 is in state STARTED 2025-09-03 00:42:42.587014 | orchestrator | 2025-09-03 00:42:42 | INFO  | Task bb7976c7-8a7e-42ea-bc83-03640aa5fdcc is in state STARTED 2025-09-03 00:42:42.587027 | orchestrator | 2025-09-03 00:42:42 | INFO  | Task a5decbfc-e914-4d76-b917-96903faa9b48 is in state STARTED 2025-09-03 00:42:42.587065 | orchestrator | 2025-09-03 00:42:42 | INFO  | Task 7d924a8e-1e34-4611-a309-eb9322576bed is in state STARTED 2025-09-03 00:42:42.587115 | orchestrator | 2025-09-03 00:42:42 | INFO  | Task 48eda5c3-a457-44ae-b9cc-8417d9341f35 is in state STARTED 2025-09-03 00:42:42.587128 | orchestrator | 2025-09-03 00:42:42 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:42:42.587140 | orchestrator | 2025-09-03 00:42:42 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:42:45.751058 | orchestrator | 2025-09-03 00:42:45 | INFO  | Task f2f4face-566f-47a6-816a-48a8d05984c3 is in state STARTED 2025-09-03 00:42:45.758395 | orchestrator | 2025-09-03 00:42:45 | INFO  | Task ce9c6c65-bb8a-4f40-8472-279e4d7da719 is in state STARTED 2025-09-03 00:42:45.758441 | orchestrator | 2025-09-03 00:42:45 | INFO  | Task bb7976c7-8a7e-42ea-bc83-03640aa5fdcc is in state STARTED 2025-09-03 00:42:45.759285 | orchestrator | 2025-09-03 00:42:45 | INFO  | Task a5decbfc-e914-4d76-b917-96903faa9b48 is in state STARTED 2025-09-03 00:42:45.760360 | orchestrator | 2025-09-03 00:42:45 | INFO  | Task 7d924a8e-1e34-4611-a309-eb9322576bed is in state STARTED 2025-09-03 00:42:45.762969 | orchestrator | 2025-09-03 00:42:45 | INFO  | Task 48eda5c3-a457-44ae-b9cc-8417d9341f35 is in state SUCCESS 2025-09-03 00:42:45.763065 | orchestrator | 2025-09-03 00:42:45.763122 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2025-09-03 00:42:45.763135 | orchestrator | 2025-09-03 00:42:45.763147 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] 
**** 2025-09-03 00:42:45.763159 | orchestrator | Wednesday 03 September 2025 00:42:32 +0000 (0:00:00.974) 0:00:00.974 *** 2025-09-03 00:42:45.763171 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:42:45.763185 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:42:45.763197 | orchestrator | changed: [testbed-manager] 2025-09-03 00:42:45.763208 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:42:45.763220 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:42:45.763230 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:42:45.763241 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:42:45.763253 | orchestrator | 2025-09-03 00:42:45.763264 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ******** 2025-09-03 00:42:45.763275 | orchestrator | Wednesday 03 September 2025 00:42:36 +0000 (0:00:03.976) 0:00:04.951 *** 2025-09-03 00:42:45.763286 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-09-03 00:42:45.763298 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-09-03 00:42:45.763309 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-09-03 00:42:45.763320 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-09-03 00:42:45.763331 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-09-03 00:42:45.763343 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-09-03 00:42:45.763354 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-09-03 00:42:45.763365 | orchestrator | 2025-09-03 00:42:45.763376 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] *** 2025-09-03 00:42:45.763388 | orchestrator | Wednesday 03 September 2025 00:42:38 +0000 (0:00:02.404) 0:00:07.356 *** 2025-09-03 00:42:45.763411 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-03 00:42:37.612570', 'end': '2025-09-03 00:42:37.621652', 'delta': '0:00:00.009082', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-03 00:42:45.763453 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-03 00:42:37.421215', 'end': '2025-09-03 00:42:37.428760', 'delta': '0:00:00.007545', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-03 00:42:45.763466 | 
orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-03 00:42:37.376754', 'end': '2025-09-03 00:42:37.385378', 'delta': '0:00:00.008624', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-03 00:42:45.763497 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-03 00:42:38.182882', 'end': '2025-09-03 00:42:38.191387', 'delta': '0:00:00.008505', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-03 00:42:45.763511 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-03 00:42:38.307807', 'end': '2025-09-03 00:42:38.314046', 'delta': '0:00:00.006239', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-03 00:42:45.763530 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-03 00:42:38.523086', 'end': '2025-09-03 00:42:38.532131', 'delta': '0:00:00.009045', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-03 00:42:45.763558 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access 
'/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-03 00:42:37.375911', 'end': '2025-09-03 00:42:37.380652', 'delta': '0:00:00.004741', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-03 00:42:45.763573 | orchestrator | 2025-09-03 00:42:45.763586 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] **** 2025-09-03 00:42:45.763599 | orchestrator | Wednesday 03 September 2025 00:42:40 +0000 (0:00:01.825) 0:00:09.181 *** 2025-09-03 00:42:45.763612 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-09-03 00:42:45.763625 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-09-03 00:42:45.763637 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-09-03 00:42:45.763650 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-09-03 00:42:45.763662 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-09-03 00:42:45.763824 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-09-03 00:42:45.763842 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-09-03 00:42:45.763856 | orchestrator | 2025-09-03 00:42:45.763869 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ****************** 2025-09-03 00:42:45.763880 | orchestrator | Wednesday 03 September 2025 00:42:42 +0000 (0:00:02.399) 0:00:11.581 *** 2025-09-03 00:42:45.763891 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2025-09-03 00:42:45.763902 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2025-09-03 00:42:45.763913 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2025-09-03 00:42:45.763924 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2025-09-03 00:42:45.763935 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2025-09-03 00:42:45.763946 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2025-09-03 00:42:45.763966 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2025-09-03 00:42:45.763977 | orchestrator | 2025-09-03 00:42:45.763988 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-03 00:42:45.764000 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-03 00:42:45.764013 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-03 00:42:45.764024 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-03 00:42:45.764035 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-03 00:42:45.764046 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-03 00:42:45.764057 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-03 00:42:45.764098 | orchestrator | testbed-node-5 : ok=5  
changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-03 00:42:45.764110 | orchestrator | 2025-09-03 00:42:45.764121 | orchestrator | 2025-09-03 00:42:45.764132 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-03 00:42:45.764144 | orchestrator | Wednesday 03 September 2025 00:42:44 +0000 (0:00:01.746) 0:00:13.328 *** 2025-09-03 00:42:45.764155 | orchestrator | =============================================================================== 2025-09-03 00:42:45.764165 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 3.98s 2025-09-03 00:42:45.764176 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 2.40s 2025-09-03 00:42:45.764187 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 2.40s 2025-09-03 00:42:45.764198 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 1.83s 2025-09-03 00:42:45.764214 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 1.75s 2025-09-03 00:42:45.764226 | orchestrator | 2025-09-03 00:42:45 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:42:45.764237 | orchestrator | 2025-09-03 00:42:45 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:42:48.841491 | orchestrator | 2025-09-03 00:42:48 | INFO  | Task f57c7551-bb69-4bd3-8ab7-670da2339df0 is in state STARTED 2025-09-03 00:42:48.846719 | orchestrator | 2025-09-03 00:42:48 | INFO  | Task f2f4face-566f-47a6-816a-48a8d05984c3 is in state STARTED 2025-09-03 00:42:48.849714 | orchestrator | 2025-09-03 00:42:48 | INFO  | Task ce9c6c65-bb8a-4f40-8472-279e4d7da719 is in state STARTED 2025-09-03 00:42:48.852674 | orchestrator | 2025-09-03 00:42:48 | INFO  | Task bb7976c7-8a7e-42ea-bc83-03640aa5fdcc is in state STARTED 2025-09-03 00:42:48.853345 | orchestrator | 2025-09-03 00:42:48 | INFO  | Task a5decbfc-e914-4d76-b917-96903faa9b48 is in state STARTED 2025-09-03 00:42:48.857743 | orchestrator | 2025-09-03 00:42:48 | INFO  | Task 7d924a8e-1e34-4611-a309-eb9322576bed is in state STARTED 2025-09-03 00:42:48.858301 | orchestrator | 2025-09-03 00:42:48 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:42:48.859308 | orchestrator | 2025-09-03 00:42:48 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:42:52.059002 | orchestrator | 2025-09-03 00:42:51 | INFO  | Task f57c7551-bb69-4bd3-8ab7-670da2339df0 is in state STARTED 2025-09-03 00:42:52.059148 | orchestrator | 2025-09-03 00:42:51 | INFO  | Task f2f4face-566f-47a6-816a-48a8d05984c3 is in state STARTED 2025-09-03 00:42:52.059165 | orchestrator | 2025-09-03 00:42:51 | INFO  | Task ce9c6c65-bb8a-4f40-8472-279e4d7da719 is in state STARTED 2025-09-03 00:42:52.059177 | orchestrator | 2025-09-03 00:42:51 | INFO  | Task bb7976c7-8a7e-42ea-bc83-03640aa5fdcc is in state STARTED 2025-09-03 00:42:52.059189 | orchestrator | 2025-09-03 00:42:51 | INFO  | Task a5decbfc-e914-4d76-b917-96903faa9b48 is in state STARTED 2025-09-03 00:42:52.059200 | orchestrator | 2025-09-03 00:42:51 | INFO  | Task 7d924a8e-1e34-4611-a309-eb9322576bed is in state STARTED 2025-09-03 00:42:52.059211 | orchestrator | 2025-09-03 00:42:51 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:42:52.059222 | orchestrator | 2025-09-03 00:42:51 | INFO  | Wait 1 second(s) until the next check 2025-09-03 
00:42:54.975222 | orchestrator | 2025-09-03 00:42:54 | INFO  | Task f57c7551-bb69-4bd3-8ab7-670da2339df0 is in state STARTED 2025-09-03 00:42:54.975456 | orchestrator | 2025-09-03 00:42:54 | INFO  | Task f2f4face-566f-47a6-816a-48a8d05984c3 is in state STARTED 2025-09-03 00:42:54.976145 | orchestrator | 2025-09-03 00:42:54 | INFO  | Task ce9c6c65-bb8a-4f40-8472-279e4d7da719 is in state STARTED 2025-09-03 00:42:54.976636 | orchestrator | 2025-09-03 00:42:54 | INFO  | Task bb7976c7-8a7e-42ea-bc83-03640aa5fdcc is in state STARTED 2025-09-03 00:42:54.977478 | orchestrator | 2025-09-03 00:42:54 | INFO  | Task a5decbfc-e914-4d76-b917-96903faa9b48 is in state STARTED 2025-09-03 00:42:54.978172 | orchestrator | 2025-09-03 00:42:54 | INFO  | Task 7d924a8e-1e34-4611-a309-eb9322576bed is in state STARTED 2025-09-03 00:42:54.978657 | orchestrator | 2025-09-03 00:42:54 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:42:54.978687 | orchestrator | 2025-09-03 00:42:54 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:42:58.008760 | orchestrator | 2025-09-03 00:42:58 | INFO  | Task f57c7551-bb69-4bd3-8ab7-670da2339df0 is in state STARTED 2025-09-03 00:42:58.009812 | orchestrator | 2025-09-03 00:42:58 | INFO  | Task f2f4face-566f-47a6-816a-48a8d05984c3 is in state STARTED 2025-09-03 00:42:58.011482 | orchestrator | 2025-09-03 00:42:58 | INFO  | Task ce9c6c65-bb8a-4f40-8472-279e4d7da719 is in state STARTED 2025-09-03 00:42:58.013243 | orchestrator | 2025-09-03 00:42:58 | INFO  | Task bb7976c7-8a7e-42ea-bc83-03640aa5fdcc is in state STARTED 2025-09-03 00:42:58.020949 | orchestrator | 2025-09-03 00:42:58 | INFO  | Task a5decbfc-e914-4d76-b917-96903faa9b48 is in state STARTED 2025-09-03 00:42:58.021446 | orchestrator | 2025-09-03 00:42:58 | INFO  | Task 7d924a8e-1e34-4611-a309-eb9322576bed is in state STARTED 2025-09-03 00:42:58.022185 | orchestrator | 2025-09-03 00:42:58 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:42:58.023669 | orchestrator | 2025-09-03 00:42:58 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:43:01.192408 | orchestrator | 2025-09-03 00:43:01 | INFO  | Task f57c7551-bb69-4bd3-8ab7-670da2339df0 is in state STARTED 2025-09-03 00:43:01.192521 | orchestrator | 2025-09-03 00:43:01 | INFO  | Task f2f4face-566f-47a6-816a-48a8d05984c3 is in state STARTED 2025-09-03 00:43:01.192536 | orchestrator | 2025-09-03 00:43:01 | INFO  | Task ce9c6c65-bb8a-4f40-8472-279e4d7da719 is in state STARTED 2025-09-03 00:43:01.192549 | orchestrator | 2025-09-03 00:43:01 | INFO  | Task bb7976c7-8a7e-42ea-bc83-03640aa5fdcc is in state STARTED 2025-09-03 00:43:01.192560 | orchestrator | 2025-09-03 00:43:01 | INFO  | Task a5decbfc-e914-4d76-b917-96903faa9b48 is in state STARTED 2025-09-03 00:43:01.192572 | orchestrator | 2025-09-03 00:43:01 | INFO  | Task 7d924a8e-1e34-4611-a309-eb9322576bed is in state STARTED 2025-09-03 00:43:01.192584 | orchestrator | 2025-09-03 00:43:01 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:43:01.192596 | orchestrator | 2025-09-03 00:43:01 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:43:04.150903 | orchestrator | 2025-09-03 00:43:04 | INFO  | Task f57c7551-bb69-4bd3-8ab7-670da2339df0 is in state STARTED 2025-09-03 00:43:04.151005 | orchestrator | 2025-09-03 00:43:04 | INFO  | Task f2f4face-566f-47a6-816a-48a8d05984c3 is in state STARTED 2025-09-03 00:43:04.151393 | orchestrator | 2025-09-03 00:43:04 | INFO  | Task 
ce9c6c65-bb8a-4f40-8472-279e4d7da719 is in state STARTED 2025-09-03 00:43:04.152539 | orchestrator | 2025-09-03 00:43:04 | INFO  | Task bb7976c7-8a7e-42ea-bc83-03640aa5fdcc is in state STARTED 2025-09-03 00:43:04.152754 | orchestrator | 2025-09-03 00:43:04 | INFO  | Task a5decbfc-e914-4d76-b917-96903faa9b48 is in state STARTED 2025-09-03 00:43:04.153296 | orchestrator | 2025-09-03 00:43:04 | INFO  | Task 7d924a8e-1e34-4611-a309-eb9322576bed is in state STARTED 2025-09-03 00:43:04.153762 | orchestrator | 2025-09-03 00:43:04 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:43:04.153787 | orchestrator | 2025-09-03 00:43:04 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:43:07.241129 | orchestrator | 2025-09-03 00:43:07 | INFO  | Task f57c7551-bb69-4bd3-8ab7-670da2339df0 is in state STARTED 2025-09-03 00:43:07.241240 | orchestrator | 2025-09-03 00:43:07 | INFO  | Task f2f4face-566f-47a6-816a-48a8d05984c3 is in state STARTED 2025-09-03 00:43:07.241255 | orchestrator | 2025-09-03 00:43:07 | INFO  | Task ce9c6c65-bb8a-4f40-8472-279e4d7da719 is in state STARTED 2025-09-03 00:43:07.241267 | orchestrator | 2025-09-03 00:43:07 | INFO  | Task bb7976c7-8a7e-42ea-bc83-03640aa5fdcc is in state STARTED 2025-09-03 00:43:07.241279 | orchestrator | 2025-09-03 00:43:07 | INFO  | Task a5decbfc-e914-4d76-b917-96903faa9b48 is in state STARTED 2025-09-03 00:43:07.241290 | orchestrator | 2025-09-03 00:43:07 | INFO  | Task 7d924a8e-1e34-4611-a309-eb9322576bed is in state SUCCESS 2025-09-03 00:43:07.241301 | orchestrator | 2025-09-03 00:43:07 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:43:07.241313 | orchestrator | 2025-09-03 00:43:07 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:43:10.501007 | orchestrator | 2025-09-03 00:43:10 | INFO  | Task f57c7551-bb69-4bd3-8ab7-670da2339df0 is in state STARTED 2025-09-03 00:43:10.501624 | orchestrator | 2025-09-03 00:43:10 | INFO  | Task f2f4face-566f-47a6-816a-48a8d05984c3 is in state STARTED 2025-09-03 00:43:10.502804 | orchestrator | 2025-09-03 00:43:10 | INFO  | Task ce9c6c65-bb8a-4f40-8472-279e4d7da719 is in state STARTED 2025-09-03 00:43:10.503893 | orchestrator | 2025-09-03 00:43:10 | INFO  | Task bb7976c7-8a7e-42ea-bc83-03640aa5fdcc is in state STARTED 2025-09-03 00:43:10.505306 | orchestrator | 2025-09-03 00:43:10 | INFO  | Task a5decbfc-e914-4d76-b917-96903faa9b48 is in state STARTED 2025-09-03 00:43:10.506289 | orchestrator | 2025-09-03 00:43:10 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:43:10.508997 | orchestrator | 2025-09-03 00:43:10 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:43:13.545055 | orchestrator | 2025-09-03 00:43:13 | INFO  | Task f57c7551-bb69-4bd3-8ab7-670da2339df0 is in state STARTED 2025-09-03 00:43:13.545751 | orchestrator | 2025-09-03 00:43:13 | INFO  | Task f2f4face-566f-47a6-816a-48a8d05984c3 is in state STARTED 2025-09-03 00:43:13.546006 | orchestrator | 2025-09-03 00:43:13 | INFO  | Task ce9c6c65-bb8a-4f40-8472-279e4d7da719 is in state STARTED 2025-09-03 00:43:13.546915 | orchestrator | 2025-09-03 00:43:13 | INFO  | Task bb7976c7-8a7e-42ea-bc83-03640aa5fdcc is in state STARTED 2025-09-03 00:43:13.547353 | orchestrator | 2025-09-03 00:43:13 | INFO  | Task a5decbfc-e914-4d76-b917-96903faa9b48 is in state SUCCESS 2025-09-03 00:43:13.547980 | orchestrator | 2025-09-03 00:43:13 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:43:13.548192 | 
orchestrator | 2025-09-03 00:43:13 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:43:16.593783 | orchestrator | 2025-09-03 00:43:16 | INFO  | Task f57c7551-bb69-4bd3-8ab7-670da2339df0 is in state STARTED 2025-09-03 00:43:16.593890 | orchestrator | 2025-09-03 00:43:16 | INFO  | Task f2f4face-566f-47a6-816a-48a8d05984c3 is in state STARTED 2025-09-03 00:43:16.594287 | orchestrator | 2025-09-03 00:43:16 | INFO  | Task ce9c6c65-bb8a-4f40-8472-279e4d7da719 is in state STARTED 2025-09-03 00:43:16.594860 | orchestrator | 2025-09-03 00:43:16 | INFO  | Task bb7976c7-8a7e-42ea-bc83-03640aa5fdcc is in state STARTED 2025-09-03 00:43:16.595476 | orchestrator | 2025-09-03 00:43:16 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:43:16.595498 | orchestrator | 2025-09-03 00:43:16 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:43:19.649997 | orchestrator | 2025-09-03 00:43:19 | INFO  | Task f57c7551-bb69-4bd3-8ab7-670da2339df0 is in state STARTED 2025-09-03 00:43:19.650451 | orchestrator | 2025-09-03 00:43:19 | INFO  | Task f2f4face-566f-47a6-816a-48a8d05984c3 is in state STARTED 2025-09-03 00:43:19.652598 | orchestrator | 2025-09-03 00:43:19 | INFO  | Task ce9c6c65-bb8a-4f40-8472-279e4d7da719 is in state STARTED 2025-09-03 00:43:19.653110 | orchestrator | 2025-09-03 00:43:19 | INFO  | Task bb7976c7-8a7e-42ea-bc83-03640aa5fdcc is in state STARTED 2025-09-03 00:43:19.653363 | orchestrator | 2025-09-03 00:43:19 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:43:19.653386 | orchestrator | 2025-09-03 00:43:19 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:43:22.697462 | orchestrator | 2025-09-03 00:43:22 | INFO  | Task f57c7551-bb69-4bd3-8ab7-670da2339df0 is in state STARTED 2025-09-03 00:43:22.698452 | orchestrator | 2025-09-03 00:43:22 | INFO  | Task f2f4face-566f-47a6-816a-48a8d05984c3 is in state STARTED 2025-09-03 00:43:22.700640 | orchestrator | 2025-09-03 00:43:22 | INFO  | Task ce9c6c65-bb8a-4f40-8472-279e4d7da719 is in state STARTED 2025-09-03 00:43:22.702521 | orchestrator | 2025-09-03 00:43:22 | INFO  | Task bb7976c7-8a7e-42ea-bc83-03640aa5fdcc is in state STARTED 2025-09-03 00:43:22.705795 | orchestrator | 2025-09-03 00:43:22 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:43:22.706118 | orchestrator | 2025-09-03 00:43:22 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:43:25.757933 | orchestrator | 2025-09-03 00:43:25 | INFO  | Task f57c7551-bb69-4bd3-8ab7-670da2339df0 is in state STARTED 2025-09-03 00:43:25.760414 | orchestrator | 2025-09-03 00:43:25 | INFO  | Task f2f4face-566f-47a6-816a-48a8d05984c3 is in state STARTED 2025-09-03 00:43:25.766492 | orchestrator | 2025-09-03 00:43:25 | INFO  | Task ce9c6c65-bb8a-4f40-8472-279e4d7da719 is in state STARTED 2025-09-03 00:43:25.768853 | orchestrator | 2025-09-03 00:43:25 | INFO  | Task bb7976c7-8a7e-42ea-bc83-03640aa5fdcc is in state STARTED 2025-09-03 00:43:25.770566 | orchestrator | 2025-09-03 00:43:25 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:43:25.770600 | orchestrator | 2025-09-03 00:43:25 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:43:28.827804 | orchestrator | 2025-09-03 00:43:28 | INFO  | Task f57c7551-bb69-4bd3-8ab7-670da2339df0 is in state STARTED 2025-09-03 00:43:28.828024 | orchestrator | 2025-09-03 00:43:28 | INFO  | Task f2f4face-566f-47a6-816a-48a8d05984c3 is in state STARTED 2025-09-03 00:43:28.828778 | 
orchestrator | 2025-09-03 00:43:28 | INFO  | Task ce9c6c65-bb8a-4f40-8472-279e4d7da719 is in state STARTED 2025-09-03 00:43:28.829687 | orchestrator | 2025-09-03 00:43:28 | INFO  | Task bb7976c7-8a7e-42ea-bc83-03640aa5fdcc is in state STARTED 2025-09-03 00:43:28.830296 | orchestrator | 2025-09-03 00:43:28 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:43:28.830485 | orchestrator | 2025-09-03 00:43:28 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:43:31.876233 | orchestrator | 2025-09-03 00:43:31 | INFO  | Task f57c7551-bb69-4bd3-8ab7-670da2339df0 is in state STARTED 2025-09-03 00:43:31.876664 | orchestrator | 2025-09-03 00:43:31 | INFO  | Task f2f4face-566f-47a6-816a-48a8d05984c3 is in state STARTED 2025-09-03 00:43:31.878128 | orchestrator | 2025-09-03 00:43:31 | INFO  | Task ce9c6c65-bb8a-4f40-8472-279e4d7da719 is in state STARTED 2025-09-03 00:43:31.878959 | orchestrator | 2025-09-03 00:43:31 | INFO  | Task bb7976c7-8a7e-42ea-bc83-03640aa5fdcc is in state STARTED 2025-09-03 00:43:31.879970 | orchestrator | 2025-09-03 00:43:31 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:43:31.879998 | orchestrator | 2025-09-03 00:43:31 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:43:34.910871 | orchestrator | 2025-09-03 00:43:34 | INFO  | Task f57c7551-bb69-4bd3-8ab7-670da2339df0 is in state STARTED 2025-09-03 00:43:34.911268 | orchestrator | 2025-09-03 00:43:34 | INFO  | Task f2f4face-566f-47a6-816a-48a8d05984c3 is in state SUCCESS 2025-09-03 00:43:34.913278 | orchestrator | 2025-09-03 00:43:34.913336 | orchestrator | 2025-09-03 00:43:34.913368 | orchestrator | PLAY [Apply role homer] ******************************************************** 2025-09-03 00:43:34.913382 | orchestrator | 2025-09-03 00:43:34.913394 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2025-09-03 00:43:34.913406 | orchestrator | Wednesday 03 September 2025 00:42:30 +0000 (0:00:00.715) 0:00:00.715 *** 2025-09-03 00:43:34.913417 | orchestrator | ok: [testbed-manager] => { 2025-09-03 00:43:34.913434 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 
2025-09-03 00:43:34.913456 | orchestrator | } 2025-09-03 00:43:34.913601 | orchestrator | 2025-09-03 00:43:34.913618 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2025-09-03 00:43:34.913629 | orchestrator | Wednesday 03 September 2025 00:42:30 +0000 (0:00:00.385) 0:00:01.101 *** 2025-09-03 00:43:34.913640 | orchestrator | ok: [testbed-manager] 2025-09-03 00:43:34.913652 | orchestrator | 2025-09-03 00:43:34.913663 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2025-09-03 00:43:34.913674 | orchestrator | Wednesday 03 September 2025 00:42:31 +0000 (0:00:00.825) 0:00:01.926 *** 2025-09-03 00:43:34.913685 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2025-09-03 00:43:34.913696 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2025-09-03 00:43:34.913707 | orchestrator | 2025-09-03 00:43:34.913717 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2025-09-03 00:43:34.913728 | orchestrator | Wednesday 03 September 2025 00:42:32 +0000 (0:00:00.980) 0:00:02.906 *** 2025-09-03 00:43:34.913739 | orchestrator | changed: [testbed-manager] 2025-09-03 00:43:34.913750 | orchestrator | 2025-09-03 00:43:34.913760 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2025-09-03 00:43:34.913771 | orchestrator | Wednesday 03 September 2025 00:42:34 +0000 (0:00:02.477) 0:00:05.384 *** 2025-09-03 00:43:34.913781 | orchestrator | changed: [testbed-manager] 2025-09-03 00:43:34.913792 | orchestrator | 2025-09-03 00:43:34.913803 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2025-09-03 00:43:34.913813 | orchestrator | Wednesday 03 September 2025 00:42:36 +0000 (0:00:01.265) 0:00:06.650 *** 2025-09-03 00:43:34.913824 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 
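The "FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left)" line above is Ansible's standard retry output: a task registers its result and is re-run until a condition holds, and each failed attempt prints the number of retries remaining. A minimal sketch of such a task is shown below, assuming an illustrative `docker compose` invocation and path rather than the actual implementation inside osism.services.homer:

- name: Manage homer service            # illustrative sketch, not the role's real task
  ansible.builtin.command:
    cmd: docker compose -f /opt/homer/docker-compose.yml up -d   # assumed command and path
  register: homer_result
  retries: 10                           # matches the "(10 retries left)" seen in the log
  delay: 10
  until: homer_result.rc == 0

With retries: 10, every failing attempt is logged as "FAILED - RETRYING: ... (N retries left)" and the task only fails the play once all retries are exhausted, which is why the log shows the retry message followed by a plain "ok" once the service comes up.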
2025-09-03 00:43:34.913835 | orchestrator | ok: [testbed-manager] 2025-09-03 00:43:34.913846 | orchestrator | 2025-09-03 00:43:34.913857 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2025-09-03 00:43:34.913867 | orchestrator | Wednesday 03 September 2025 00:43:02 +0000 (0:00:26.617) 0:00:33.267 *** 2025-09-03 00:43:34.913878 | orchestrator | changed: [testbed-manager] 2025-09-03 00:43:34.913888 | orchestrator | 2025-09-03 00:43:34.913899 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-03 00:43:34.913910 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-03 00:43:34.913923 | orchestrator | 2025-09-03 00:43:34.913934 | orchestrator | 2025-09-03 00:43:34.913945 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-03 00:43:34.913975 | orchestrator | Wednesday 03 September 2025 00:43:05 +0000 (0:00:02.395) 0:00:35.663 *** 2025-09-03 00:43:34.913987 | orchestrator | =============================================================================== 2025-09-03 00:43:34.913997 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 26.62s 2025-09-03 00:43:34.914008 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.48s 2025-09-03 00:43:34.914114 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 2.40s 2025-09-03 00:43:34.914130 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.27s 2025-09-03 00:43:34.914141 | orchestrator | osism.services.homer : Create required directories ---------------------- 0.98s 2025-09-03 00:43:34.914152 | orchestrator | osism.services.homer : Create traefik external network ------------------ 0.83s 2025-09-03 00:43:34.914163 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.39s 2025-09-03 00:43:34.914174 | orchestrator | 2025-09-03 00:43:34.914184 | orchestrator | 2025-09-03 00:43:34.914198 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2025-09-03 00:43:34.914210 | orchestrator | 2025-09-03 00:43:34.914223 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2025-09-03 00:43:34.914235 | orchestrator | Wednesday 03 September 2025 00:42:28 +0000 (0:00:00.686) 0:00:00.686 *** 2025-09-03 00:43:34.914248 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2025-09-03 00:43:34.914263 | orchestrator | 2025-09-03 00:43:34.914276 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2025-09-03 00:43:34.914288 | orchestrator | Wednesday 03 September 2025 00:42:29 +0000 (0:00:00.410) 0:00:01.096 *** 2025-09-03 00:43:34.914301 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2025-09-03 00:43:34.914313 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2025-09-03 00:43:34.914326 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2025-09-03 00:43:34.914339 | orchestrator | 2025-09-03 00:43:34.914352 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2025-09-03 
00:43:34.914365 | orchestrator | Wednesday 03 September 2025 00:42:31 +0000 (0:00:01.711) 0:00:02.807 *** 2025-09-03 00:43:34.914377 | orchestrator | changed: [testbed-manager] 2025-09-03 00:43:34.914390 | orchestrator | 2025-09-03 00:43:34.914402 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2025-09-03 00:43:34.914415 | orchestrator | Wednesday 03 September 2025 00:42:33 +0000 (0:00:02.193) 0:00:05.001 *** 2025-09-03 00:43:34.914443 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2025-09-03 00:43:34.914457 | orchestrator | ok: [testbed-manager] 2025-09-03 00:43:34.914469 | orchestrator | 2025-09-03 00:43:34.914489 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2025-09-03 00:43:34.914502 | orchestrator | Wednesday 03 September 2025 00:43:04 +0000 (0:00:31.417) 0:00:36.419 *** 2025-09-03 00:43:34.914516 | orchestrator | changed: [testbed-manager] 2025-09-03 00:43:34.914530 | orchestrator | 2025-09-03 00:43:34.914543 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2025-09-03 00:43:34.914554 | orchestrator | Wednesday 03 September 2025 00:43:05 +0000 (0:00:00.951) 0:00:37.370 *** 2025-09-03 00:43:34.914565 | orchestrator | ok: [testbed-manager] 2025-09-03 00:43:34.914576 | orchestrator | 2025-09-03 00:43:34.914587 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2025-09-03 00:43:34.914599 | orchestrator | Wednesday 03 September 2025 00:43:06 +0000 (0:00:01.049) 0:00:38.420 *** 2025-09-03 00:43:34.914619 | orchestrator | changed: [testbed-manager] 2025-09-03 00:43:34.914801 | orchestrator | 2025-09-03 00:43:34.914824 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2025-09-03 00:43:34.914848 | orchestrator | Wednesday 03 September 2025 00:43:08 +0000 (0:00:02.101) 0:00:40.521 *** 2025-09-03 00:43:34.914859 | orchestrator | changed: [testbed-manager] 2025-09-03 00:43:34.914870 | orchestrator | 2025-09-03 00:43:34.914881 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2025-09-03 00:43:34.914891 | orchestrator | Wednesday 03 September 2025 00:43:10 +0000 (0:00:01.658) 0:00:42.180 *** 2025-09-03 00:43:34.914902 | orchestrator | changed: [testbed-manager] 2025-09-03 00:43:34.914913 | orchestrator | 2025-09-03 00:43:34.914924 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2025-09-03 00:43:34.914935 | orchestrator | Wednesday 03 September 2025 00:43:11 +0000 (0:00:00.912) 0:00:43.092 *** 2025-09-03 00:43:34.914946 | orchestrator | ok: [testbed-manager] 2025-09-03 00:43:34.914957 | orchestrator | 2025-09-03 00:43:34.914967 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-03 00:43:34.914978 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-03 00:43:34.914990 | orchestrator | 2025-09-03 00:43:34.915001 | orchestrator | 2025-09-03 00:43:34.915011 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-03 00:43:34.915022 | orchestrator | Wednesday 03 September 2025 00:43:11 +0000 (0:00:00.434) 0:00:43.527 *** 2025-09-03 00:43:34.915033 | orchestrator | 
=============================================================================== 2025-09-03 00:43:34.915044 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 31.42s 2025-09-03 00:43:34.915054 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.19s 2025-09-03 00:43:34.915087 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.10s 2025-09-03 00:43:34.915098 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.71s 2025-09-03 00:43:34.915109 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.66s 2025-09-03 00:43:34.915119 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.05s 2025-09-03 00:43:34.915130 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 0.95s 2025-09-03 00:43:34.915141 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.91s 2025-09-03 00:43:34.915152 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.43s 2025-09-03 00:43:34.915162 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.41s 2025-09-03 00:43:34.915173 | orchestrator | 2025-09-03 00:43:34.915184 | orchestrator | 2025-09-03 00:43:34.915195 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-03 00:43:34.915206 | orchestrator | 2025-09-03 00:43:34.915216 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-03 00:43:34.915227 | orchestrator | Wednesday 03 September 2025 00:42:30 +0000 (0:00:00.689) 0:00:00.690 *** 2025-09-03 00:43:34.915238 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2025-09-03 00:43:34.915249 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2025-09-03 00:43:34.915260 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2025-09-03 00:43:34.915271 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2025-09-03 00:43:34.915281 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2025-09-03 00:43:34.915292 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2025-09-03 00:43:34.915303 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2025-09-03 00:43:34.915314 | orchestrator | 2025-09-03 00:43:34.915325 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2025-09-03 00:43:34.915335 | orchestrator | 2025-09-03 00:43:34.915346 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2025-09-03 00:43:34.915357 | orchestrator | Wednesday 03 September 2025 00:42:32 +0000 (0:00:02.287) 0:00:02.977 *** 2025-09-03 00:43:34.915389 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-03 00:43:34.915403 | orchestrator | 2025-09-03 00:43:34.915414 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2025-09-03 00:43:34.915425 | orchestrator | Wednesday 03 September 2025 00:42:34 +0000 (0:00:01.884) 0:00:04.862 *** 2025-09-03 
00:43:34.915436 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:43:34.915447 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:43:34.915458 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:43:34.915469 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:43:34.915480 | orchestrator | ok: [testbed-manager] 2025-09-03 00:43:34.915500 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:43:34.915512 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:43:34.915523 | orchestrator | 2025-09-03 00:43:34.915539 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2025-09-03 00:43:34.915551 | orchestrator | Wednesday 03 September 2025 00:42:36 +0000 (0:00:01.611) 0:00:06.475 *** 2025-09-03 00:43:34.915562 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:43:34.915573 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:43:34.915584 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:43:34.915595 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:43:34.915606 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:43:34.915617 | orchestrator | ok: [testbed-manager] 2025-09-03 00:43:34.915628 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:43:34.915639 | orchestrator | 2025-09-03 00:43:34.915650 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2025-09-03 00:43:34.915661 | orchestrator | Wednesday 03 September 2025 00:42:39 +0000 (0:00:03.373) 0:00:09.849 *** 2025-09-03 00:43:34.915672 | orchestrator | changed: [testbed-manager] 2025-09-03 00:43:34.915682 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:43:34.915694 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:43:34.915704 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:43:34.915716 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:43:34.915726 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:43:34.915737 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:43:34.915748 | orchestrator | 2025-09-03 00:43:34.915759 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2025-09-03 00:43:34.915770 | orchestrator | Wednesday 03 September 2025 00:42:41 +0000 (0:00:01.891) 0:00:11.741 *** 2025-09-03 00:43:34.915781 | orchestrator | changed: [testbed-manager] 2025-09-03 00:43:34.915792 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:43:34.915803 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:43:34.915814 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:43:34.915825 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:43:34.915836 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:43:34.915847 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:43:34.915857 | orchestrator | 2025-09-03 00:43:34.915869 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2025-09-03 00:43:34.915880 | orchestrator | Wednesday 03 September 2025 00:42:51 +0000 (0:00:09.580) 0:00:21.321 *** 2025-09-03 00:43:34.915891 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:43:34.915901 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:43:34.915913 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:43:34.915923 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:43:34.915934 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:43:34.915945 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:43:34.915957 | orchestrator | changed: [testbed-manager] 2025-09-03 00:43:34.915968 | 
orchestrator | 2025-09-03 00:43:34.915979 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2025-09-03 00:43:34.915989 | orchestrator | Wednesday 03 September 2025 00:43:15 +0000 (0:00:24.592) 0:00:45.914 *** 2025-09-03 00:43:34.916001 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-03 00:43:34.916020 | orchestrator | 2025-09-03 00:43:34.916032 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2025-09-03 00:43:34.916043 | orchestrator | Wednesday 03 September 2025 00:43:16 +0000 (0:00:01.256) 0:00:47.170 *** 2025-09-03 00:43:34.916054 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2025-09-03 00:43:34.916117 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2025-09-03 00:43:34.916130 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2025-09-03 00:43:34.916141 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2025-09-03 00:43:34.916152 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2025-09-03 00:43:34.916163 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2025-09-03 00:43:34.916174 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2025-09-03 00:43:34.916185 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2025-09-03 00:43:34.916195 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2025-09-03 00:43:34.916206 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2025-09-03 00:43:34.916217 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2025-09-03 00:43:34.916228 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2025-09-03 00:43:34.916239 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2025-09-03 00:43:34.916250 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2025-09-03 00:43:34.916261 | orchestrator | 2025-09-03 00:43:34.916272 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2025-09-03 00:43:34.916284 | orchestrator | Wednesday 03 September 2025 00:43:21 +0000 (0:00:04.363) 0:00:51.533 *** 2025-09-03 00:43:34.916295 | orchestrator | ok: [testbed-manager] 2025-09-03 00:43:34.916306 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:43:34.916317 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:43:34.916328 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:43:34.916339 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:43:34.916350 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:43:34.916360 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:43:34.916371 | orchestrator | 2025-09-03 00:43:34.916381 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2025-09-03 00:43:34.916390 | orchestrator | Wednesday 03 September 2025 00:43:22 +0000 (0:00:01.281) 0:00:52.815 *** 2025-09-03 00:43:34.916400 | orchestrator | changed: [testbed-manager] 2025-09-03 00:43:34.916410 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:43:34.916420 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:43:34.916430 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:43:34.916439 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:43:34.916449 | orchestrator | 
changed: [testbed-node-4] 2025-09-03 00:43:34.916459 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:43:34.916469 | orchestrator | 2025-09-03 00:43:34.916478 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2025-09-03 00:43:34.916495 | orchestrator | Wednesday 03 September 2025 00:43:23 +0000 (0:00:01.380) 0:00:54.196 *** 2025-09-03 00:43:34.916505 | orchestrator | ok: [testbed-manager] 2025-09-03 00:43:34.916520 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:43:34.916530 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:43:34.916540 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:43:34.916549 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:43:34.916559 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:43:34.916569 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:43:34.916579 | orchestrator | 2025-09-03 00:43:34.916589 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2025-09-03 00:43:34.916599 | orchestrator | Wednesday 03 September 2025 00:43:25 +0000 (0:00:01.455) 0:00:55.652 *** 2025-09-03 00:43:34.916609 | orchestrator | ok: [testbed-manager] 2025-09-03 00:43:34.916625 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:43:34.916635 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:43:34.916645 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:43:34.916655 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:43:34.916664 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:43:34.916674 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:43:34.916684 | orchestrator | 2025-09-03 00:43:34.916694 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2025-09-03 00:43:34.916704 | orchestrator | Wednesday 03 September 2025 00:43:27 +0000 (0:00:02.259) 0:00:57.912 *** 2025-09-03 00:43:34.916714 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2025-09-03 00:43:34.916725 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-03 00:43:34.916735 | orchestrator | 2025-09-03 00:43:34.916745 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2025-09-03 00:43:34.916755 | orchestrator | Wednesday 03 September 2025 00:43:28 +0000 (0:00:01.085) 0:00:58.997 *** 2025-09-03 00:43:34.916765 | orchestrator | changed: [testbed-manager] 2025-09-03 00:43:34.916775 | orchestrator | 2025-09-03 00:43:34.916784 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2025-09-03 00:43:34.916794 | orchestrator | Wednesday 03 September 2025 00:43:30 +0000 (0:00:01.575) 0:01:00.572 *** 2025-09-03 00:43:34.916804 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:43:34.916814 | orchestrator | changed: [testbed-manager] 2025-09-03 00:43:34.916824 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:43:34.916834 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:43:34.916844 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:43:34.916854 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:43:34.916864 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:43:34.916873 | orchestrator | 2025-09-03 00:43:34.916883 | orchestrator | PLAY RECAP 
********************************************************************* 2025-09-03 00:43:34.916894 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-03 00:43:34.916904 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-03 00:43:34.916914 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-03 00:43:34.916924 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-03 00:43:34.916934 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-03 00:43:34.916944 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-03 00:43:34.916954 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-03 00:43:34.916963 | orchestrator | 2025-09-03 00:43:34.916973 | orchestrator | 2025-09-03 00:43:34.916983 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-03 00:43:34.916993 | orchestrator | Wednesday 03 September 2025 00:43:33 +0000 (0:00:03.439) 0:01:04.012 *** 2025-09-03 00:43:34.917003 | orchestrator | =============================================================================== 2025-09-03 00:43:34.917013 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 24.59s 2025-09-03 00:43:34.917023 | orchestrator | osism.services.netdata : Add repository --------------------------------- 9.58s 2025-09-03 00:43:34.917032 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 4.36s 2025-09-03 00:43:34.917048 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.44s 2025-09-03 00:43:34.917058 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.37s 2025-09-03 00:43:34.917080 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.29s 2025-09-03 00:43:34.917091 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.26s 2025-09-03 00:43:34.917100 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 1.89s 2025-09-03 00:43:34.917110 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.89s 2025-09-03 00:43:34.917120 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 1.61s 2025-09-03 00:43:34.917130 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 1.58s 2025-09-03 00:43:34.917145 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.46s 2025-09-03 00:43:34.917155 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.38s 2025-09-03 00:43:34.917165 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.28s 2025-09-03 00:43:34.917175 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.26s 2025-09-03 00:43:34.917185 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.09s 2025-09-03 00:43:34.917195 | orchestrator | 2025-09-03 00:43:34 | INFO  | Task 
ce9c6c65-bb8a-4f40-8472-279e4d7da719 is in state STARTED 2025-09-03 00:43:34.917205 | orchestrator | 2025-09-03 00:43:34 | INFO  | Task bb7976c7-8a7e-42ea-bc83-03640aa5fdcc is in state STARTED 2025-09-03 00:43:34.917216 | orchestrator | 2025-09-03 00:43:34 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:43:34.917226 | orchestrator | 2025-09-03 00:43:34 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:43:37.944141 | orchestrator | 2025-09-03 00:43:37 | INFO  | Task f57c7551-bb69-4bd3-8ab7-670da2339df0 is in state SUCCESS 2025-09-03 00:43:37.947281 | orchestrator | 2025-09-03 00:43:37 | INFO  | Task ce9c6c65-bb8a-4f40-8472-279e4d7da719 is in state STARTED 2025-09-03 00:43:37.947571 | orchestrator | 2025-09-03 00:43:37 | INFO  | Task bb7976c7-8a7e-42ea-bc83-03640aa5fdcc is in state STARTED 2025-09-03 00:43:37.950775 | orchestrator | 2025-09-03 00:43:37 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:43:37.950801 | orchestrator | 2025-09-03 00:43:37 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:43:40.991285 | orchestrator | 2025-09-03 00:43:40 | INFO  | Task ce9c6c65-bb8a-4f40-8472-279e4d7da719 is in state STARTED 2025-09-03 00:43:40.992131 | orchestrator | 2025-09-03 00:43:40 | INFO  | Task bb7976c7-8a7e-42ea-bc83-03640aa5fdcc is in state STARTED 2025-09-03 00:43:40.993607 | orchestrator | 2025-09-03 00:43:40 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:43:40.993631 | orchestrator | 2025-09-03 00:43:40 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:43:44.038256 | orchestrator | 2025-09-03 00:43:44 | INFO  | Task ce9c6c65-bb8a-4f40-8472-279e4d7da719 is in state STARTED 2025-09-03 00:43:44.040097 | orchestrator | 2025-09-03 00:43:44 | INFO  | Task bb7976c7-8a7e-42ea-bc83-03640aa5fdcc is in state STARTED 2025-09-03 00:43:44.040952 | orchestrator | 2025-09-03 00:43:44 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:43:44.041403 | orchestrator | 2025-09-03 00:43:44 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:43:47.085034 | orchestrator | 2025-09-03 00:43:47 | INFO  | Task ce9c6c65-bb8a-4f40-8472-279e4d7da719 is in state STARTED 2025-09-03 00:43:47.085880 | orchestrator | 2025-09-03 00:43:47 | INFO  | Task bb7976c7-8a7e-42ea-bc83-03640aa5fdcc is in state STARTED 2025-09-03 00:43:47.087678 | orchestrator | 2025-09-03 00:43:47 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:43:47.087779 | orchestrator | 2025-09-03 00:43:47 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:43:50.134234 | orchestrator | 2025-09-03 00:43:50 | INFO  | Task ce9c6c65-bb8a-4f40-8472-279e4d7da719 is in state STARTED 2025-09-03 00:43:50.136121 | orchestrator | 2025-09-03 00:43:50 | INFO  | Task bb7976c7-8a7e-42ea-bc83-03640aa5fdcc is in state STARTED 2025-09-03 00:43:50.139291 | orchestrator | 2025-09-03 00:43:50 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:43:50.139317 | orchestrator | 2025-09-03 00:43:50 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:43:53.177945 | orchestrator | 2025-09-03 00:43:53 | INFO  | Task ce9c6c65-bb8a-4f40-8472-279e4d7da719 is in state STARTED 2025-09-03 00:43:53.178250 | orchestrator | 2025-09-03 00:43:53 | INFO  | Task bb7976c7-8a7e-42ea-bc83-03640aa5fdcc is in state STARTED 2025-09-03 00:43:53.178820 | orchestrator | 2025-09-03 00:43:53 | INFO  | Task 
0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:43:53.178844 | orchestrator | 2025-09-03 00:43:53 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:43:56.225881 | orchestrator | 2025-09-03 00:43:56 | INFO  | Task ce9c6c65-bb8a-4f40-8472-279e4d7da719 is in state STARTED 2025-09-03 00:43:56.228135 | orchestrator | 2025-09-03 00:43:56 | INFO  | Task bb7976c7-8a7e-42ea-bc83-03640aa5fdcc is in state STARTED 2025-09-03 00:43:56.229306 | orchestrator | 2025-09-03 00:43:56 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:43:56.229332 | orchestrator | 2025-09-03 00:43:56 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:43:59.279889 | orchestrator | 2025-09-03 00:43:59 | INFO  | Task ce9c6c65-bb8a-4f40-8472-279e4d7da719 is in state STARTED 2025-09-03 00:43:59.280002 | orchestrator | 2025-09-03 00:43:59 | INFO  | Task bb7976c7-8a7e-42ea-bc83-03640aa5fdcc is in state STARTED 2025-09-03 00:43:59.289609 | orchestrator | 2025-09-03 00:43:59 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:43:59.289639 | orchestrator | 2025-09-03 00:43:59 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:44:02.333732 | orchestrator | 2025-09-03 00:44:02 | INFO  | Task ce9c6c65-bb8a-4f40-8472-279e4d7da719 is in state STARTED 2025-09-03 00:44:02.337205 | orchestrator | 2025-09-03 00:44:02 | INFO  | Task bb7976c7-8a7e-42ea-bc83-03640aa5fdcc is in state STARTED 2025-09-03 00:44:02.337246 | orchestrator | 2025-09-03 00:44:02 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:44:02.337260 | orchestrator | 2025-09-03 00:44:02 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:44:05.381607 | orchestrator | 2025-09-03 00:44:05 | INFO  | Task ce9c6c65-bb8a-4f40-8472-279e4d7da719 is in state STARTED 2025-09-03 00:44:05.383523 | orchestrator | 2025-09-03 00:44:05 | INFO  | Task bb7976c7-8a7e-42ea-bc83-03640aa5fdcc is in state STARTED 2025-09-03 00:44:05.383558 | orchestrator | 2025-09-03 00:44:05 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:44:05.383571 | orchestrator | 2025-09-03 00:44:05 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:44:08.423227 | orchestrator | 2025-09-03 00:44:08 | INFO  | Task ce9c6c65-bb8a-4f40-8472-279e4d7da719 is in state STARTED 2025-09-03 00:44:08.427701 | orchestrator | 2025-09-03 00:44:08 | INFO  | Task bb7976c7-8a7e-42ea-bc83-03640aa5fdcc is in state STARTED 2025-09-03 00:44:08.427781 | orchestrator | 2025-09-03 00:44:08 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:44:08.427795 | orchestrator | 2025-09-03 00:44:08 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:44:11.466433 | orchestrator | 2025-09-03 00:44:11 | INFO  | Task ce9c6c65-bb8a-4f40-8472-279e4d7da719 is in state STARTED 2025-09-03 00:44:11.466700 | orchestrator | 2025-09-03 00:44:11 | INFO  | Task bb7976c7-8a7e-42ea-bc83-03640aa5fdcc is in state STARTED 2025-09-03 00:44:11.467855 | orchestrator | 2025-09-03 00:44:11 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:44:11.467878 | orchestrator | 2025-09-03 00:44:11 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:44:14.498680 | orchestrator | 2025-09-03 00:44:14 | INFO  | Task ce9c6c65-bb8a-4f40-8472-279e4d7da719 is in state STARTED 2025-09-03 00:44:14.499553 | orchestrator | 2025-09-03 00:44:14 | INFO  | Task bb7976c7-8a7e-42ea-bc83-03640aa5fdcc is in state 
STARTED 2025-09-03 00:44:14.500797 | orchestrator | 2025-09-03 00:44:14 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:44:14.500821 | orchestrator | 2025-09-03 00:44:14 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:44:17.560487 | orchestrator | 2025-09-03 00:44:17 | INFO  | Task ce9c6c65-bb8a-4f40-8472-279e4d7da719 is in state STARTED 2025-09-03 00:44:17.560703 | orchestrator | 2025-09-03 00:44:17 | INFO  | Task bb7976c7-8a7e-42ea-bc83-03640aa5fdcc is in state STARTED 2025-09-03 00:44:17.561668 | orchestrator | 2025-09-03 00:44:17 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:44:17.561692 | orchestrator | 2025-09-03 00:44:17 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:44:20.601311 | orchestrator | 2025-09-03 00:44:20 | INFO  | Task ce9c6c65-bb8a-4f40-8472-279e4d7da719 is in state STARTED 2025-09-03 00:44:20.602637 | orchestrator | 2025-09-03 00:44:20 | INFO  | Task bb7976c7-8a7e-42ea-bc83-03640aa5fdcc is in state STARTED 2025-09-03 00:44:20.603720 | orchestrator | 2025-09-03 00:44:20 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:44:20.603745 | orchestrator | 2025-09-03 00:44:20 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:44:23.644127 | orchestrator | 2025-09-03 00:44:23 | INFO  | Task ce9c6c65-bb8a-4f40-8472-279e4d7da719 is in state STARTED 2025-09-03 00:44:23.645678 | orchestrator | 2025-09-03 00:44:23 | INFO  | Task bb7976c7-8a7e-42ea-bc83-03640aa5fdcc is in state STARTED 2025-09-03 00:44:23.647706 | orchestrator | 2025-09-03 00:44:23 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:44:23.647766 | orchestrator | 2025-09-03 00:44:23 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:44:26.695926 | orchestrator | 2025-09-03 00:44:26 | INFO  | Task ce9c6c65-bb8a-4f40-8472-279e4d7da719 is in state STARTED 2025-09-03 00:44:26.697213 | orchestrator | 2025-09-03 00:44:26 | INFO  | Task bb7976c7-8a7e-42ea-bc83-03640aa5fdcc is in state STARTED 2025-09-03 00:44:26.699275 | orchestrator | 2025-09-03 00:44:26 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:44:26.699309 | orchestrator | 2025-09-03 00:44:26 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:44:29.744211 | orchestrator | 2025-09-03 00:44:29 | INFO  | Task ce9c6c65-bb8a-4f40-8472-279e4d7da719 is in state STARTED 2025-09-03 00:44:29.745271 | orchestrator | 2025-09-03 00:44:29 | INFO  | Task bb7976c7-8a7e-42ea-bc83-03640aa5fdcc is in state STARTED 2025-09-03 00:44:29.746723 | orchestrator | 2025-09-03 00:44:29 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:44:29.747342 | orchestrator | 2025-09-03 00:44:29 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:44:32.792674 | orchestrator | 2025-09-03 00:44:32 | INFO  | Task ce9c6c65-bb8a-4f40-8472-279e4d7da719 is in state STARTED 2025-09-03 00:44:32.792887 | orchestrator | 2025-09-03 00:44:32 | INFO  | Task bb7976c7-8a7e-42ea-bc83-03640aa5fdcc is in state STARTED 2025-09-03 00:44:32.795044 | orchestrator | 2025-09-03 00:44:32 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:44:32.795103 | orchestrator | 2025-09-03 00:44:32 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:44:35.838925 | orchestrator | 2025-09-03 00:44:35 | INFO  | Task ce9c6c65-bb8a-4f40-8472-279e4d7da719 is in state STARTED 2025-09-03 00:44:35.840884 | orchestrator 
| 2025-09-03 00:44:35 | INFO  | Task bb7976c7-8a7e-42ea-bc83-03640aa5fdcc is in state STARTED 2025-09-03 00:44:35.841907 | orchestrator | 2025-09-03 00:44:35 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:44:35.841935 | orchestrator | 2025-09-03 00:44:35 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:44:38.888411 | orchestrator | 2025-09-03 00:44:38 | INFO  | Task ce9c6c65-bb8a-4f40-8472-279e4d7da719 is in state STARTED 2025-09-03 00:44:38.888544 | orchestrator | 2025-09-03 00:44:38 | INFO  | Task bb7976c7-8a7e-42ea-bc83-03640aa5fdcc is in state STARTED 2025-09-03 00:44:38.890533 | orchestrator | 2025-09-03 00:44:38 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:44:38.890563 | orchestrator | 2025-09-03 00:44:38 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:44:41.916560 | orchestrator | 2025-09-03 00:44:41 | INFO  | Task ce9c6c65-bb8a-4f40-8472-279e4d7da719 is in state STARTED 2025-09-03 00:44:41.918359 | orchestrator | 2025-09-03 00:44:41 | INFO  | Task bb7976c7-8a7e-42ea-bc83-03640aa5fdcc is in state STARTED 2025-09-03 00:44:41.919222 | orchestrator | 2025-09-03 00:44:41 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:44:41.919601 | orchestrator | 2025-09-03 00:44:41 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:44:44.959589 | orchestrator | 2025-09-03 00:44:44 | INFO  | Task ce9c6c65-bb8a-4f40-8472-279e4d7da719 is in state STARTED 2025-09-03 00:44:44.964520 | orchestrator | 2025-09-03 00:44:44 | INFO  | Task bb7976c7-8a7e-42ea-bc83-03640aa5fdcc is in state SUCCESS 2025-09-03 00:44:44.968425 | orchestrator | 2025-09-03 00:44:44.968478 | orchestrator | 2025-09-03 00:44:44.968492 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2025-09-03 00:44:44.968504 | orchestrator | 2025-09-03 00:44:44.968516 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2025-09-03 00:44:44.968527 | orchestrator | Wednesday 03 September 2025 00:42:48 +0000 (0:00:00.288) 0:00:00.288 *** 2025-09-03 00:44:44.968539 | orchestrator | ok: [testbed-manager] 2025-09-03 00:44:44.968554 | orchestrator | 2025-09-03 00:44:44.968566 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2025-09-03 00:44:44.968577 | orchestrator | Wednesday 03 September 2025 00:42:49 +0000 (0:00:00.634) 0:00:00.922 *** 2025-09-03 00:44:44.968588 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2025-09-03 00:44:44.968600 | orchestrator | 2025-09-03 00:44:44.968611 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2025-09-03 00:44:44.968622 | orchestrator | Wednesday 03 September 2025 00:42:49 +0000 (0:00:00.395) 0:00:01.318 *** 2025-09-03 00:44:44.968634 | orchestrator | changed: [testbed-manager] 2025-09-03 00:44:44.968645 | orchestrator | 2025-09-03 00:44:44.968656 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2025-09-03 00:44:44.968690 | orchestrator | Wednesday 03 September 2025 00:42:50 +0000 (0:00:00.891) 0:00:02.210 *** 2025-09-03 00:44:44.968701 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 
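Note: the "Manage phpmyadmin service" task above drives Docker Compose against the docker-compose.yml that the previous task copied to /opt/phpmyadmin; the FAILED - RETRYING record shows the task was retried once before reporting ok about 41 seconds later, most likely while the image was still being pulled and the container started. The generated file itself is not part of this log. A minimal sketch of what such a compose file could look like, assuming a stock phpMyAdmin container attached to the externally created traefik network; the image, environment variable and network name are illustrative assumptions, not taken from the osism.services.phpmyadmin role:

    # Illustrative sketch only; not the file templated by the role.
    services:
      phpmyadmin:
        image: phpmyadmin:latest          # assumed image tag
        restart: unless-stopped
        environment:
          PMA_ARBITRARY: "1"              # lets the UI connect to any reachable MariaDB host
        networks:
          - traefik
    networks:
      traefik:
        external: true                    # matches the "Create traefik external network" task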
2025-09-03 00:44:44.968713 | orchestrator | ok: [testbed-manager] 2025-09-03 00:44:44.968724 | orchestrator | 2025-09-03 00:44:44.968735 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2025-09-03 00:44:44.968754 | orchestrator | Wednesday 03 September 2025 00:43:31 +0000 (0:00:41.040) 0:00:43.250 *** 2025-09-03 00:44:44.968765 | orchestrator | changed: [testbed-manager] 2025-09-03 00:44:44.968776 | orchestrator | 2025-09-03 00:44:44.968787 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-03 00:44:44.968799 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-03 00:44:44.968812 | orchestrator | 2025-09-03 00:44:44.968823 | orchestrator | 2025-09-03 00:44:44.968834 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-03 00:44:44.968845 | orchestrator | Wednesday 03 September 2025 00:43:37 +0000 (0:00:05.708) 0:00:48.959 *** 2025-09-03 00:44:44.968856 | orchestrator | =============================================================================== 2025-09-03 00:44:44.968866 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 41.04s 2025-09-03 00:44:44.968877 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 5.71s 2025-09-03 00:44:44.968888 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 0.89s 2025-09-03 00:44:44.968899 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 0.63s 2025-09-03 00:44:44.968910 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.40s 2025-09-03 00:44:44.968921 | orchestrator | 2025-09-03 00:44:44.968932 | orchestrator | 2025-09-03 00:44:44.968943 | orchestrator | PLAY [Apply role common] ******************************************************* 2025-09-03 00:44:44.968954 | orchestrator | 2025-09-03 00:44:44.968965 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-09-03 00:44:44.968976 | orchestrator | Wednesday 03 September 2025 00:42:23 +0000 (0:00:00.215) 0:00:00.215 *** 2025-09-03 00:44:44.968988 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-03 00:44:44.969000 | orchestrator | 2025-09-03 00:44:44.969011 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2025-09-03 00:44:44.969023 | orchestrator | Wednesday 03 September 2025 00:42:24 +0000 (0:00:01.118) 0:00:01.333 *** 2025-09-03 00:44:44.969036 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-03 00:44:44.969104 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-03 00:44:44.969118 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-03 00:44:44.969131 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-03 00:44:44.969144 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-03 00:44:44.969157 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-03 00:44:44.969169 | orchestrator | changed: 
[testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-03 00:44:44.969182 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-03 00:44:44.969194 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-03 00:44:44.969206 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-03 00:44:44.969219 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-03 00:44:44.969231 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-03 00:44:44.969254 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-03 00:44:44.969267 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-03 00:44:44.969279 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-03 00:44:44.969292 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-03 00:44:44.969318 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-03 00:44:44.969331 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-03 00:44:44.969344 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-03 00:44:44.969357 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-03 00:44:44.969370 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-03 00:44:44.969382 | orchestrator | 2025-09-03 00:44:44.969393 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-09-03 00:44:44.969404 | orchestrator | Wednesday 03 September 2025 00:42:28 +0000 (0:00:03.893) 0:00:05.227 *** 2025-09-03 00:44:44.969415 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-03 00:44:44.969427 | orchestrator | 2025-09-03 00:44:44.969439 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2025-09-03 00:44:44.969450 | orchestrator | Wednesday 03 September 2025 00:42:29 +0000 (0:00:01.078) 0:00:06.305 *** 2025-09-03 00:44:44.969471 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-03 00:44:44.969487 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-03 00:44:44.969499 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-03 00:44:44.969511 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-03 00:44:44.969522 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-03 00:44:44.969548 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:44:44.969562 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-03 00:44:44.969573 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 
'dimensions': {}}}) 2025-09-03 00:44:44.969594 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:44:44.969607 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:44:44.969618 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:44:44.969645 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:44:44.969671 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:44:44.969690 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:44:44.969702 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:44:44.969718 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:44:44.969730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:44:44.969742 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:44:44.969754 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:44:44.969772 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:44:44.969784 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:44:44.969795 | 
orchestrator | 2025-09-03 00:44:44.969806 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2025-09-03 00:44:44.969817 | orchestrator | Wednesday 03 September 2025 00:42:34 +0000 (0:00:05.371) 0:00:11.676 *** 2025-09-03 00:44:44.969842 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-03 00:44:44.969855 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-03 00:44:44.969866 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-03 00:44:44.969878 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:44:44.969891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-03 00:44:44.969909 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-03 00:44:44.969927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-03 00:44:44.969939 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:44:44.969951 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-03 00:44:44.969963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-03 00:44:44.969987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-03 00:44:44.969999 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:44:44.970011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-03 00:44:44.970106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-03 00:44:44.970119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2025-09-03 00:44:44.970130 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:44:44.970142 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-03 00:44:44.970161 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-03 00:44:44.970173 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-03 00:44:44.970203 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-03 00:44:44.970215 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-03 00:44:44.970227 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-03 00:44:44.970239 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:44:44.970250 | orchestrator | skipping: [testbed-node-4] 2025-09-03 
00:44:44.970266 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-03 00:44:44.970278 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-03 00:44:44.970295 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-03 00:44:44.970307 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:44:44.970319 | orchestrator | 2025-09-03 00:44:44.970330 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2025-09-03 00:44:44.970341 | orchestrator | Wednesday 03 September 2025 00:42:35 +0000 (0:00:01.162) 0:00:12.839 *** 2025-09-03 00:44:44.970353 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-03 00:44:44.970365 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-03 00:44:44.970383 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-03 00:44:44.970395 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:44:44.970407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-03 00:44:44.970423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-03 00:44:44.970441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-03 00:44:44.970453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-03 00:44:44.970465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-03 00:44:44.970476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-03 00:44:44.970487 | orchestrator | 
skipping: [testbed-node-0] 2025-09-03 00:44:44.970499 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:44:44.970510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-03 00:44:44.970528 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-03 00:44:44.970541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-03 00:44:44.970552 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:44:44.970568 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-03 00:44:44.970587 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-03 00:44:44.970599 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-03 00:44:44.970610 | orchestrator | skipping: [testbed-node-3] 
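Note: both backend-TLS copy tasks above skip every item on every host, consistent with backend (service-to-service) TLS not being enabled in this testbed deployment; the loop still walks the same service map (fluentd, kolla-toolbox, cron) used by the other common-role tasks. A minimal sketch of that per-item skip pattern, with assumed names; common_services and kolla_enable_tls_backend are inferred from the task titles and are not verified against the role source:

    # Illustrative sketch of the loop/skip pattern, not the role's actual task.
    - name: "common | Copying over backend internal TLS certificate"
      ansible.builtin.template:
        src: backend-cert.pem                    # placeholder source
        dest: "/etc/kolla/{{ item.key }}/backend-cert.pem"
        mode: "0600"
      with_dict: "{{ common_services }}"         # fluentd, kolla-toolbox, cron
      when:
        - item.value.enabled | bool
        - kolla_enable_tls_backend | bool        # false here, so each item reports 'skipping'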
2025-09-03 00:44:44.970622 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-03 00:44:44.970633 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-03 00:44:44.970650 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-03 00:44:44.970662 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:44:44.970674 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-03 00:44:44.970691 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-03 00:44:44.970713 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-03 00:44:44.970724 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:44:44.970736 | orchestrator | 2025-09-03 00:44:44.970747 | orchestrator | TASK [common : 
Copying over /run subdirectories conf] ************************** 2025-09-03 00:44:44.970758 | orchestrator | Wednesday 03 September 2025 00:42:38 +0000 (0:00:02.593) 0:00:15.433 *** 2025-09-03 00:44:44.970770 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:44:44.970781 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:44:44.970792 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:44:44.970803 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:44:44.970815 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:44:44.970826 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:44:44.970837 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:44:44.970848 | orchestrator | 2025-09-03 00:44:44.970859 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2025-09-03 00:44:44.970870 | orchestrator | Wednesday 03 September 2025 00:42:39 +0000 (0:00:01.517) 0:00:16.950 *** 2025-09-03 00:44:44.970881 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:44:44.970892 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:44:44.970903 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:44:44.970914 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:44:44.970926 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:44:44.970937 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:44:44.970948 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:44:44.970959 | orchestrator | 2025-09-03 00:44:44.970970 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2025-09-03 00:44:44.970981 | orchestrator | Wednesday 03 September 2025 00:42:41 +0000 (0:00:01.473) 0:00:18.424 *** 2025-09-03 00:44:44.970993 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-03 00:44:44.971005 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-03 00:44:44.971023 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-03 00:44:44.971042 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 
'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-03 00:44:44.971114 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-03 00:44:44.971126 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-03 00:44:44.971137 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-03 00:44:44.971149 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:44:44.971161 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:44:44.971172 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:44:44.971197 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:44:44.971210 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:44:44.971221 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:44:44.971232 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:44:44.971244 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:44:44.971255 | orchestrator | changed: [testbed-manager] => 
(item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:44:44.971267 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:44:44.971298 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:44:44.971315 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:44:44.971325 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:44:44.971339 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:44:44.971349 | orchestrator | 2025-09-03 00:44:44.971359 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2025-09-03 00:44:44.971369 | orchestrator | Wednesday 03 September 2025 00:42:46 +0000 (0:00:04.863) 0:00:23.287 *** 2025-09-03 00:44:44.971379 | orchestrator | [WARNING]: Skipped 2025-09-03 00:44:44.971389 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2025-09-03 00:44:44.971399 | orchestrator | to this access issue: 2025-09-03 00:44:44.971409 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2025-09-03 00:44:44.971419 | orchestrator | directory 2025-09-03 00:44:44.971429 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-03 00:44:44.971439 | orchestrator | 2025-09-03 00:44:44.971449 | orchestrator | TASK [common : Find 
custom fluentd filter config files] ************************ 2025-09-03 00:44:44.971459 | orchestrator | Wednesday 03 September 2025 00:42:47 +0000 (0:00:01.132) 0:00:24.420 *** 2025-09-03 00:44:44.971468 | orchestrator | [WARNING]: Skipped 2025-09-03 00:44:44.971478 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2025-09-03 00:44:44.971488 | orchestrator | to this access issue: 2025-09-03 00:44:44.971498 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2025-09-03 00:44:44.971507 | orchestrator | directory 2025-09-03 00:44:44.971517 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-03 00:44:44.971527 | orchestrator | 2025-09-03 00:44:44.971537 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2025-09-03 00:44:44.971546 | orchestrator | Wednesday 03 September 2025 00:42:48 +0000 (0:00:01.184) 0:00:25.604 *** 2025-09-03 00:44:44.971556 | orchestrator | [WARNING]: Skipped 2025-09-03 00:44:44.971566 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2025-09-03 00:44:44.971575 | orchestrator | to this access issue: 2025-09-03 00:44:44.971585 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2025-09-03 00:44:44.971595 | orchestrator | directory 2025-09-03 00:44:44.971605 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-03 00:44:44.971614 | orchestrator | 2025-09-03 00:44:44.971624 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2025-09-03 00:44:44.971640 | orchestrator | Wednesday 03 September 2025 00:42:49 +0000 (0:00:00.721) 0:00:26.326 *** 2025-09-03 00:44:44.971650 | orchestrator | [WARNING]: Skipped 2025-09-03 00:44:44.971660 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2025-09-03 00:44:44.971669 | orchestrator | to this access issue: 2025-09-03 00:44:44.971679 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2025-09-03 00:44:44.971689 | orchestrator | directory 2025-09-03 00:44:44.971699 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-03 00:44:44.971708 | orchestrator | 2025-09-03 00:44:44.971718 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2025-09-03 00:44:44.971727 | orchestrator | Wednesday 03 September 2025 00:42:50 +0000 (0:00:00.734) 0:00:27.060 *** 2025-09-03 00:44:44.971737 | orchestrator | changed: [testbed-manager] 2025-09-03 00:44:44.971747 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:44:44.971757 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:44:44.971766 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:44:44.971776 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:44:44.971786 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:44:44.971795 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:44:44.971805 | orchestrator | 2025-09-03 00:44:44.971815 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2025-09-03 00:44:44.971824 | orchestrator | Wednesday 03 September 2025 00:42:53 +0000 (0:00:03.666) 0:00:30.726 *** 2025-09-03 00:44:44.971834 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-03 00:44:44.971844 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-03 00:44:44.971854 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-03 00:44:44.971868 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-03 00:44:44.971879 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-03 00:44:44.971888 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-03 00:44:44.971898 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-03 00:44:44.971907 | orchestrator | 2025-09-03 00:44:44.971917 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2025-09-03 00:44:44.971927 | orchestrator | Wednesday 03 September 2025 00:42:56 +0000 (0:00:02.505) 0:00:33.232 *** 2025-09-03 00:44:44.971937 | orchestrator | changed: [testbed-manager] 2025-09-03 00:44:44.971947 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:44:44.971957 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:44:44.971966 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:44:44.971976 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:44:44.971986 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:44:44.971995 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:44:44.972005 | orchestrator | 2025-09-03 00:44:44.972015 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-09-03 00:44:44.972024 | orchestrator | Wednesday 03 September 2025 00:42:59 +0000 (0:00:02.867) 0:00:36.100 *** 2025-09-03 00:44:44.972038 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-03 00:44:44.972064 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-03 00:44:44.972081 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-03 00:44:44.972092 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-03 00:44:44.972102 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-03 00:44:44.972121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-03 00:44:44.972132 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:44:44.972150 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-03 00:44:44.972160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-03 
00:44:44.972176 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:44:44.972186 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-03 00:44:44.972196 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-03 00:44:44.972207 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:44:44.972222 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:44:44.972233 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:44:44.972243 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-03 00:44:44.972257 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-03 00:44:44.972273 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-03 00:44:44.972283 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-03 00:44:44.972294 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:44:44.972304 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:44:44.972314 | orchestrator | 2025-09-03 00:44:44.972324 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2025-09-03 00:44:44.972334 | orchestrator | Wednesday 03 September 2025 00:43:02 +0000 (0:00:03.135) 0:00:39.235 *** 2025-09-03 00:44:44.972344 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-03 00:44:44.972354 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-03 00:44:44.972364 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-03 00:44:44.972382 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-03 00:44:44.972392 | 
orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-03 00:44:44.972402 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-03 00:44:44.972411 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-03 00:44:44.972421 | orchestrator | 2025-09-03 00:44:44.972431 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2025-09-03 00:44:44.972441 | orchestrator | Wednesday 03 September 2025 00:43:05 +0000 (0:00:03.643) 0:00:42.879 *** 2025-09-03 00:44:44.972450 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-03 00:44:44.972468 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-03 00:44:44.972478 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-03 00:44:44.972488 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-03 00:44:44.972498 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-03 00:44:44.972507 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-03 00:44:44.972524 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-03 00:44:44.972534 | orchestrator | 2025-09-03 00:44:44.972544 | orchestrator | TASK [common : Check common containers] **************************************** 2025-09-03 00:44:44.972554 | orchestrator | Wednesday 03 September 2025 00:43:08 +0000 (0:00:02.838) 0:00:45.718 *** 2025-09-03 00:44:44.972564 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-03 00:44:44.972574 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-03 00:44:44.972585 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-03 00:44:44.972595 | orchestrator | changed: [testbed-manager] => 
(item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:44:44.972606 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-03 00:44:44.972622 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:44:44.972642 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:44:44.972653 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-03 00:44:44.972663 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-03 00:44:44.972673 | orchestrator 
| changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:44:44.972683 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:44:44.972694 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-03 00:44:44.972709 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:44:44.972725 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:44:44.972735 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:44:44.972749 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 
'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:44:44.972760 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:44:44.972770 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:44:44.972780 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:44:44.972790 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:44:44.972800 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:44:44.972818 | orchestrator | 2025-09-03 00:44:44.972834 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-09-03 00:44:44.972844 | orchestrator | Wednesday 03 September 2025 00:43:12 +0000 (0:00:03.997) 0:00:49.715 *** 2025-09-03 00:44:44.972854 | orchestrator | changed: [testbed-manager] 2025-09-03 00:44:44.972864 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:44:44.972873 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:44:44.972883 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:44:44.972893 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:44:44.972903 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:44:44.972912 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:44:44.972922 | orchestrator | 2025-09-03 
00:44:44.972932 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-09-03 00:44:44.972942 | orchestrator | Wednesday 03 September 2025 00:43:13 +0000 (0:00:01.311) 0:00:51.027 *** 2025-09-03 00:44:44.972951 | orchestrator | changed: [testbed-manager] 2025-09-03 00:44:44.972961 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:44:44.972971 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:44:44.972980 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:44:44.972990 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:44:44.973000 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:44:44.973009 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:44:44.973019 | orchestrator | 2025-09-03 00:44:44.973029 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-03 00:44:44.973039 | orchestrator | Wednesday 03 September 2025 00:43:15 +0000 (0:00:01.463) 0:00:52.490 *** 2025-09-03 00:44:44.973063 | orchestrator | 2025-09-03 00:44:44.973073 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-03 00:44:44.973083 | orchestrator | Wednesday 03 September 2025 00:43:15 +0000 (0:00:00.067) 0:00:52.557 *** 2025-09-03 00:44:44.973093 | orchestrator | 2025-09-03 00:44:44.973107 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-03 00:44:44.973117 | orchestrator | Wednesday 03 September 2025 00:43:15 +0000 (0:00:00.073) 0:00:52.631 *** 2025-09-03 00:44:44.973126 | orchestrator | 2025-09-03 00:44:44.973136 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-03 00:44:44.973146 | orchestrator | Wednesday 03 September 2025 00:43:15 +0000 (0:00:00.068) 0:00:52.700 *** 2025-09-03 00:44:44.973155 | orchestrator | 2025-09-03 00:44:44.973165 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-03 00:44:44.973175 | orchestrator | Wednesday 03 September 2025 00:43:15 +0000 (0:00:00.175) 0:00:52.876 *** 2025-09-03 00:44:44.973185 | orchestrator | 2025-09-03 00:44:44.973194 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-03 00:44:44.973204 | orchestrator | Wednesday 03 September 2025 00:43:15 +0000 (0:00:00.058) 0:00:52.934 *** 2025-09-03 00:44:44.973213 | orchestrator | 2025-09-03 00:44:44.973223 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-03 00:44:44.973233 | orchestrator | Wednesday 03 September 2025 00:43:15 +0000 (0:00:00.059) 0:00:52.993 *** 2025-09-03 00:44:44.973243 | orchestrator | 2025-09-03 00:44:44.973252 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2025-09-03 00:44:44.973262 | orchestrator | Wednesday 03 September 2025 00:43:16 +0000 (0:00:00.082) 0:00:53.075 *** 2025-09-03 00:44:44.973272 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:44:44.973282 | orchestrator | changed: [testbed-manager] 2025-09-03 00:44:44.973291 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:44:44.973301 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:44:44.973311 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:44:44.973321 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:44:44.973331 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:44:44.973346 | orchestrator | 
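[Annotation, not part of the job output] The "common" role tasks above (ensuring config directories, checking containers, creating the log volume) all loop over the same mapping of service definitions, which is why every item in the log repeats container_name, image, environment, volumes and dimensions per host. The following is a minimal, illustrative Python sketch of that data shape and of iterating only enabled services; it is not kolla-ansible source, and the helper name and filtering logic are assumptions made for illustration.

    # Sketch of the service-definition mapping the common role iterates over.
    # Field names mirror the loop items printed above; everything else is assumed.
    common_services = {
        "fluentd": {
            "container_name": "fluentd",
            "group": "fluentd",
            "enabled": True,
            "image": "registry.osism.tech/kolla/fluentd:2024.2",
            "environment": {"KOLLA_CONFIG_STRATEGY": "COPY_ALWAYS"},
            "volumes": [
                "/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro",
                "kolla_logs:/var/log/kolla/",
            ],
            "dimensions": {},
        },
        "cron": {
            "container_name": "cron",
            "group": "cron",
            "enabled": True,
            "image": "registry.osism.tech/kolla/cron:2024.2",
            "environment": {"KOLLA_LOGROTATE_SCHEDULE": "daily"},
            "volumes": ["/etc/kolla/cron/:/var/lib/kolla/config_files/:ro"],
            "dimensions": {},
        },
    }

    def enabled_services(services):
        """Yield (name, definition) pairs for enabled services,
        mirroring the per-item loop output seen in the tasks above."""
        for name, definition in services.items():
            if definition.get("enabled"):
                yield name, definition

    for name, definition in enabled_services(common_services):
        print(f"{name}: {definition['image']}")

Items that appear as "skipping" for some hosts (e.g. kolla-toolbox in the owner/permission task) are the same definitions filtered out by a task-level conditional, not missing from the mapping.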
2025-09-03 00:44:44.973356 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2025-09-03 00:44:44.973366 | orchestrator | Wednesday 03 September 2025 00:43:57 +0000 (0:00:41.060) 0:01:34.136 *** 2025-09-03 00:44:44.973375 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:44:44.973385 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:44:44.973395 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:44:44.973405 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:44:44.973414 | orchestrator | changed: [testbed-manager] 2025-09-03 00:44:44.973424 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:44:44.973433 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:44:44.973443 | orchestrator | 2025-09-03 00:44:44.973453 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2025-09-03 00:44:44.973462 | orchestrator | Wednesday 03 September 2025 00:44:32 +0000 (0:00:35.291) 0:02:09.427 *** 2025-09-03 00:44:44.973472 | orchestrator | ok: [testbed-manager] 2025-09-03 00:44:44.973482 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:44:44.973492 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:44:44.973502 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:44:44.973512 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:44:44.973521 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:44:44.973531 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:44:44.973541 | orchestrator | 2025-09-03 00:44:44.973550 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2025-09-03 00:44:44.973560 | orchestrator | Wednesday 03 September 2025 00:44:34 +0000 (0:00:02.204) 0:02:11.632 *** 2025-09-03 00:44:44.973570 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:44:44.973579 | orchestrator | changed: [testbed-manager] 2025-09-03 00:44:44.973589 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:44:44.973599 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:44:44.973608 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:44:44.973618 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:44:44.973628 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:44:44.973638 | orchestrator | 2025-09-03 00:44:44.973647 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-03 00:44:44.973658 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-03 00:44:44.973669 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-03 00:44:44.973684 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-03 00:44:44.973694 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-03 00:44:44.973704 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-03 00:44:44.973714 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-03 00:44:44.973724 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-03 00:44:44.973734 | orchestrator | 2025-09-03 00:44:44.973744 | orchestrator | 2025-09-03 00:44:44.973753 | orchestrator | TASKS RECAP 
******************************************************************** 2025-09-03 00:44:44.973763 | orchestrator | Wednesday 03 September 2025 00:44:44 +0000 (0:00:09.550) 0:02:21.182 *** 2025-09-03 00:44:44.973773 | orchestrator | =============================================================================== 2025-09-03 00:44:44.973783 | orchestrator | common : Restart fluentd container ------------------------------------- 41.06s 2025-09-03 00:44:44.973798 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 35.29s 2025-09-03 00:44:44.973808 | orchestrator | common : Restart cron container ----------------------------------------- 9.55s 2025-09-03 00:44:44.973817 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.37s 2025-09-03 00:44:44.973827 | orchestrator | common : Copying over config.json files for services -------------------- 4.86s 2025-09-03 00:44:44.973837 | orchestrator | common : Check common containers ---------------------------------------- 4.00s 2025-09-03 00:44:44.973846 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.89s 2025-09-03 00:44:44.973856 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 3.67s 2025-09-03 00:44:44.973866 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.64s 2025-09-03 00:44:44.973875 | orchestrator | common : Ensuring config directories have correct owner and permission --- 3.14s 2025-09-03 00:44:44.973885 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.87s 2025-09-03 00:44:44.973895 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.84s 2025-09-03 00:44:44.973904 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.59s 2025-09-03 00:44:44.973914 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.51s 2025-09-03 00:44:44.973928 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.20s 2025-09-03 00:44:44.973938 | orchestrator | common : Copying over /run subdirectories conf -------------------------- 1.52s 2025-09-03 00:44:44.973948 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 1.47s 2025-09-03 00:44:44.973957 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.46s 2025-09-03 00:44:44.973967 | orchestrator | common : Creating log volume -------------------------------------------- 1.31s 2025-09-03 00:44:44.973977 | orchestrator | common : Find custom fluentd filter config files ------------------------ 1.18s 2025-09-03 00:44:44.973986 | orchestrator | 2025-09-03 00:44:44 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:44:44.973996 | orchestrator | 2025-09-03 00:44:44 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:44:48.027332 | orchestrator | 2025-09-03 00:44:48 | INFO  | Task f3c465b7-dc1c-42cc-acd1-2e6db06e7dc6 is in state STARTED 2025-09-03 00:44:48.027434 | orchestrator | 2025-09-03 00:44:48 | INFO  | Task ce9c6c65-bb8a-4f40-8472-279e4d7da719 is in state STARTED 2025-09-03 00:44:48.027450 | orchestrator | 2025-09-03 00:44:48 | INFO  | Task 694c8273-af16-40fd-825e-ede60e23172d is in state STARTED 2025-09-03 00:44:48.029815 | orchestrator | 2025-09-03 00:44:48 | INFO  | Task 
63774016-02d7-4c16-9f28-a394c4d3b666 is in state STARTED 2025-09-03 00:44:48.030462 | orchestrator | 2025-09-03 00:44:48 | INFO  | Task 1f742b31-8445-4e4a-9ad9-ea355e00e948 is in state STARTED 2025-09-03 00:44:48.030916 | orchestrator | 2025-09-03 00:44:48 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:44:48.031155 | orchestrator | 2025-09-03 00:44:48 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:44:51.054509 | orchestrator | 2025-09-03 00:44:51 | INFO  | Task f3c465b7-dc1c-42cc-acd1-2e6db06e7dc6 is in state STARTED 2025-09-03 00:44:51.054830 | orchestrator | 2025-09-03 00:44:51 | INFO  | Task ce9c6c65-bb8a-4f40-8472-279e4d7da719 is in state STARTED 2025-09-03 00:44:51.055366 | orchestrator | 2025-09-03 00:44:51 | INFO  | Task 694c8273-af16-40fd-825e-ede60e23172d is in state STARTED 2025-09-03 00:44:51.056001 | orchestrator | 2025-09-03 00:44:51 | INFO  | Task 63774016-02d7-4c16-9f28-a394c4d3b666 is in state STARTED 2025-09-03 00:44:51.056603 | orchestrator | 2025-09-03 00:44:51 | INFO  | Task 1f742b31-8445-4e4a-9ad9-ea355e00e948 is in state STARTED 2025-09-03 00:44:51.057337 | orchestrator | 2025-09-03 00:44:51 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:44:51.057424 | orchestrator | 2025-09-03 00:44:51 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:44:54.087135 | orchestrator | 2025-09-03 00:44:54 | INFO  | Task f3c465b7-dc1c-42cc-acd1-2e6db06e7dc6 is in state STARTED 2025-09-03 00:44:54.087312 | orchestrator | 2025-09-03 00:44:54 | INFO  | Task ce9c6c65-bb8a-4f40-8472-279e4d7da719 is in state STARTED 2025-09-03 00:44:54.087678 | orchestrator | 2025-09-03 00:44:54 | INFO  | Task 694c8273-af16-40fd-825e-ede60e23172d is in state STARTED 2025-09-03 00:44:54.089156 | orchestrator | 2025-09-03 00:44:54 | INFO  | Task 63774016-02d7-4c16-9f28-a394c4d3b666 is in state STARTED 2025-09-03 00:44:54.089533 | orchestrator | 2025-09-03 00:44:54 | INFO  | Task 1f742b31-8445-4e4a-9ad9-ea355e00e948 is in state STARTED 2025-09-03 00:44:54.090326 | orchestrator | 2025-09-03 00:44:54 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:44:54.090354 | orchestrator | 2025-09-03 00:44:54 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:44:57.119707 | orchestrator | 2025-09-03 00:44:57 | INFO  | Task f3c465b7-dc1c-42cc-acd1-2e6db06e7dc6 is in state STARTED 2025-09-03 00:44:57.119824 | orchestrator | 2025-09-03 00:44:57 | INFO  | Task ce9c6c65-bb8a-4f40-8472-279e4d7da719 is in state STARTED 2025-09-03 00:44:57.120255 | orchestrator | 2025-09-03 00:44:57 | INFO  | Task 694c8273-af16-40fd-825e-ede60e23172d is in state STARTED 2025-09-03 00:44:57.120989 | orchestrator | 2025-09-03 00:44:57 | INFO  | Task 63774016-02d7-4c16-9f28-a394c4d3b666 is in state STARTED 2025-09-03 00:44:57.121567 | orchestrator | 2025-09-03 00:44:57 | INFO  | Task 1f742b31-8445-4e4a-9ad9-ea355e00e948 is in state STARTED 2025-09-03 00:44:57.122350 | orchestrator | 2025-09-03 00:44:57 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:44:57.122381 | orchestrator | 2025-09-03 00:44:57 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:45:00.151718 | orchestrator | 2025-09-03 00:45:00 | INFO  | Task f3c465b7-dc1c-42cc-acd1-2e6db06e7dc6 is in state STARTED 2025-09-03 00:45:00.151940 | orchestrator | 2025-09-03 00:45:00 | INFO  | Task ce9c6c65-bb8a-4f40-8472-279e4d7da719 is in state STARTED 2025-09-03 00:45:00.152700 | orchestrator | 2025-09-03 
00:45:00 | INFO  | Task 694c8273-af16-40fd-825e-ede60e23172d is in state STARTED 2025-09-03 00:45:00.153688 | orchestrator | 2025-09-03 00:45:00 | INFO  | Task 63774016-02d7-4c16-9f28-a394c4d3b666 is in state STARTED 2025-09-03 00:45:00.154147 | orchestrator | 2025-09-03 00:45:00 | INFO  | Task 1f742b31-8445-4e4a-9ad9-ea355e00e948 is in state STARTED 2025-09-03 00:45:00.154814 | orchestrator | 2025-09-03 00:45:00 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:45:00.154839 | orchestrator | 2025-09-03 00:45:00 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:45:03.186664 | orchestrator | 2025-09-03 00:45:03 | INFO  | Task f3c465b7-dc1c-42cc-acd1-2e6db06e7dc6 is in state STARTED 2025-09-03 00:45:03.186779 | orchestrator | 2025-09-03 00:45:03 | INFO  | Task ce9c6c65-bb8a-4f40-8472-279e4d7da719 is in state STARTED 2025-09-03 00:45:03.187287 | orchestrator | 2025-09-03 00:45:03 | INFO  | Task 694c8273-af16-40fd-825e-ede60e23172d is in state STARTED 2025-09-03 00:45:03.188256 | orchestrator | 2025-09-03 00:45:03 | INFO  | Task 63774016-02d7-4c16-9f28-a394c4d3b666 is in state STARTED 2025-09-03 00:45:03.189636 | orchestrator | 2025-09-03 00:45:03 | INFO  | Task 1f742b31-8445-4e4a-9ad9-ea355e00e948 is in state STARTED 2025-09-03 00:45:03.190384 | orchestrator | 2025-09-03 00:45:03 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:45:03.191188 | orchestrator | 2025-09-03 00:45:03 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:45:06.242211 | orchestrator | 2025-09-03 00:45:06 | INFO  | Task f3c465b7-dc1c-42cc-acd1-2e6db06e7dc6 is in state STARTED 2025-09-03 00:45:06.242428 | orchestrator | 2025-09-03 00:45:06 | INFO  | Task ce9c6c65-bb8a-4f40-8472-279e4d7da719 is in state STARTED 2025-09-03 00:45:06.243089 | orchestrator | 2025-09-03 00:45:06 | INFO  | Task 8e4f7254-4b68-4c82-94f6-5f083843efa7 is in state STARTED 2025-09-03 00:45:06.243629 | orchestrator | 2025-09-03 00:45:06 | INFO  | Task 694c8273-af16-40fd-825e-ede60e23172d is in state SUCCESS 2025-09-03 00:45:06.244238 | orchestrator | 2025-09-03 00:45:06 | INFO  | Task 63774016-02d7-4c16-9f28-a394c4d3b666 is in state STARTED 2025-09-03 00:45:06.244871 | orchestrator | 2025-09-03 00:45:06 | INFO  | Task 1f742b31-8445-4e4a-9ad9-ea355e00e948 is in state STARTED 2025-09-03 00:45:06.245692 | orchestrator | 2025-09-03 00:45:06 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:45:06.245714 | orchestrator | 2025-09-03 00:45:06 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:45:09.376639 | orchestrator | 2025-09-03 00:45:09 | INFO  | Task f3c465b7-dc1c-42cc-acd1-2e6db06e7dc6 is in state STARTED 2025-09-03 00:45:09.376728 | orchestrator | 2025-09-03 00:45:09 | INFO  | Task ce9c6c65-bb8a-4f40-8472-279e4d7da719 is in state STARTED 2025-09-03 00:45:09.378340 | orchestrator | 2025-09-03 00:45:09 | INFO  | Task 8e4f7254-4b68-4c82-94f6-5f083843efa7 is in state STARTED 2025-09-03 00:45:09.378369 | orchestrator | 2025-09-03 00:45:09 | INFO  | Task 63774016-02d7-4c16-9f28-a394c4d3b666 is in state STARTED 2025-09-03 00:45:09.378381 | orchestrator | 2025-09-03 00:45:09 | INFO  | Task 1f742b31-8445-4e4a-9ad9-ea355e00e948 is in state STARTED 2025-09-03 00:45:09.378393 | orchestrator | 2025-09-03 00:45:09 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:45:09.378421 | orchestrator | 2025-09-03 00:45:09 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:45:12.447928 | 
orchestrator | 2025-09-03 00:45:12 | INFO  | Task f3c465b7-dc1c-42cc-acd1-2e6db06e7dc6 is in state STARTED 2025-09-03 00:45:12.451622 | orchestrator | 2025-09-03 00:45:12 | INFO  | Task ce9c6c65-bb8a-4f40-8472-279e4d7da719 is in state STARTED 2025-09-03 00:45:12.452591 | orchestrator | 2025-09-03 00:45:12 | INFO  | Task 8e4f7254-4b68-4c82-94f6-5f083843efa7 is in state STARTED 2025-09-03 00:45:12.453755 | orchestrator | 2025-09-03 00:45:12 | INFO  | Task 63774016-02d7-4c16-9f28-a394c4d3b666 is in state STARTED 2025-09-03 00:45:12.454960 | orchestrator | 2025-09-03 00:45:12 | INFO  | Task 1f742b31-8445-4e4a-9ad9-ea355e00e948 is in state STARTED 2025-09-03 00:45:12.455740 | orchestrator | 2025-09-03 00:45:12 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:45:12.456100 | orchestrator | 2025-09-03 00:45:12 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:45:15.538802 | orchestrator | 2025-09-03 00:45:15.538899 | orchestrator | 2025-09-03 00:45:15.538914 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-03 00:45:15.538927 | orchestrator | 2025-09-03 00:45:15.538938 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-03 00:45:15.538950 | orchestrator | Wednesday 03 September 2025 00:44:51 +0000 (0:00:00.210) 0:00:00.210 *** 2025-09-03 00:45:15.538962 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:45:15.538975 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:45:15.538987 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:45:15.538998 | orchestrator | 2025-09-03 00:45:15.539028 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-03 00:45:15.539078 | orchestrator | Wednesday 03 September 2025 00:44:51 +0000 (0:00:00.289) 0:00:00.500 *** 2025-09-03 00:45:15.539091 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2025-09-03 00:45:15.539102 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2025-09-03 00:45:15.539113 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2025-09-03 00:45:15.539124 | orchestrator | 2025-09-03 00:45:15.539135 | orchestrator | PLAY [Apply role memcached] **************************************************** 2025-09-03 00:45:15.539146 | orchestrator | 2025-09-03 00:45:15.539157 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2025-09-03 00:45:15.539167 | orchestrator | Wednesday 03 September 2025 00:44:52 +0000 (0:00:00.511) 0:00:01.011 *** 2025-09-03 00:45:15.539178 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 00:45:15.539190 | orchestrator | 2025-09-03 00:45:15.539201 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2025-09-03 00:45:15.539212 | orchestrator | Wednesday 03 September 2025 00:44:52 +0000 (0:00:00.656) 0:00:01.668 *** 2025-09-03 00:45:15.539222 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-09-03 00:45:15.539233 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-09-03 00:45:15.539245 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-09-03 00:45:15.539256 | orchestrator | 2025-09-03 00:45:15.539267 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2025-09-03 00:45:15.539277 | orchestrator | Wednesday 03 
September 2025 00:44:53 +0000 (0:00:00.848) 0:00:02.516 *** 2025-09-03 00:45:15.539288 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-09-03 00:45:15.539299 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-09-03 00:45:15.539310 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-09-03 00:45:15.539321 | orchestrator | 2025-09-03 00:45:15.539331 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2025-09-03 00:45:15.539344 | orchestrator | Wednesday 03 September 2025 00:44:55 +0000 (0:00:01.800) 0:00:04.317 *** 2025-09-03 00:45:15.539357 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:45:15.539370 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:45:15.539383 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:45:15.539396 | orchestrator | 2025-09-03 00:45:15.539409 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2025-09-03 00:45:15.539422 | orchestrator | Wednesday 03 September 2025 00:44:57 +0000 (0:00:01.483) 0:00:05.800 *** 2025-09-03 00:45:15.539434 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:45:15.539447 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:45:15.539460 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:45:15.539472 | orchestrator | 2025-09-03 00:45:15.539485 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-03 00:45:15.539498 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-03 00:45:15.539512 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-03 00:45:15.539524 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-03 00:45:15.539537 | orchestrator | 2025-09-03 00:45:15.539549 | orchestrator | 2025-09-03 00:45:15.539562 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-03 00:45:15.539575 | orchestrator | Wednesday 03 September 2025 00:45:03 +0000 (0:00:06.768) 0:00:12.569 *** 2025-09-03 00:45:15.539587 | orchestrator | =============================================================================== 2025-09-03 00:45:15.539600 | orchestrator | memcached : Restart memcached container --------------------------------- 6.77s 2025-09-03 00:45:15.539621 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.80s 2025-09-03 00:45:15.539646 | orchestrator | memcached : Check memcached container ----------------------------------- 1.48s 2025-09-03 00:45:15.539660 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.85s 2025-09-03 00:45:15.539672 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.66s 2025-09-03 00:45:15.539685 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.51s 2025-09-03 00:45:15.539698 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.29s 2025-09-03 00:45:15.539710 | orchestrator | 2025-09-03 00:45:15.539723 | orchestrator | 2025-09-03 00:45:15.539734 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-03 00:45:15.539745 | orchestrator | 2025-09-03 00:45:15.539755 | orchestrator | TASK [Group hosts based on Kolla action] 
*************************************** 2025-09-03 00:45:15.539766 | orchestrator | Wednesday 03 September 2025 00:44:51 +0000 (0:00:00.396) 0:00:00.396 *** 2025-09-03 00:45:15.539777 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:45:15.539788 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:45:15.539799 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:45:15.539810 | orchestrator | 2025-09-03 00:45:15.539821 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-03 00:45:15.539847 | orchestrator | Wednesday 03 September 2025 00:44:51 +0000 (0:00:00.412) 0:00:00.809 *** 2025-09-03 00:45:15.539858 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2025-09-03 00:45:15.539869 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2025-09-03 00:45:15.539880 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2025-09-03 00:45:15.539891 | orchestrator | 2025-09-03 00:45:15.539903 | orchestrator | PLAY [Apply role redis] ******************************************************** 2025-09-03 00:45:15.539913 | orchestrator | 2025-09-03 00:45:15.539924 | orchestrator | TASK [redis : include_tasks] *************************************************** 2025-09-03 00:45:15.539935 | orchestrator | Wednesday 03 September 2025 00:44:52 +0000 (0:00:00.514) 0:00:01.323 *** 2025-09-03 00:45:15.539946 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 00:45:15.539957 | orchestrator | 2025-09-03 00:45:15.539968 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2025-09-03 00:45:15.539979 | orchestrator | Wednesday 03 September 2025 00:44:52 +0000 (0:00:00.580) 0:00:01.903 *** 2025-09-03 00:45:15.539992 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-03 00:45:15.540008 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-03 00:45:15.540020 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 
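[Annotation, not part of the job output] The redis and redis-sentinel definitions above additionally carry a 'healthcheck' block whose durations are given in seconds as strings. Below is a minimal, hypothetical Python helper (not kolla-ansible code) showing how such a block could be mapped to the nanosecond-based fields of Docker's HealthConfig; the function name and the mapping itself are assumptions for illustration only.

    # Assumed helper: convert a kolla-style healthcheck block (seconds as
    # strings) into Docker HealthConfig-style fields (durations in nanoseconds).
    NANOSECONDS_PER_SECOND = 1_000_000_000

    def to_docker_healthcheck(hc: dict) -> dict:
        """'test' stays a command list; interval/timeout/start_period become ns."""
        return {
            "Test": hc["test"],  # e.g. ['CMD-SHELL', 'healthcheck_listen redis-server 6379']
            "Interval": int(hc["interval"]) * NANOSECONDS_PER_SECOND,
            "Timeout": int(hc["timeout"]) * NANOSECONDS_PER_SECOND,
            "Retries": int(hc["retries"]),
            "StartPeriod": int(hc["start_period"]) * NANOSECONDS_PER_SECOND,
        }

    redis_healthcheck = {
        "interval": "30",
        "retries": "3",
        "start_period": "5",
        "test": ["CMD-SHELL", "healthcheck_listen redis-server 6379"],
        "timeout": "30",
    }

    print(to_docker_healthcheck(redis_healthcheck))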
2025-09-03 00:45:15.540054 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-03 00:45:15.540072 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-03 00:45:15.540093 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-03 00:45:15.540105 | orchestrator | 2025-09-03 00:45:15.540116 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-09-03 00:45:15.540127 | orchestrator | Wednesday 03 September 2025 00:44:54 +0000 (0:00:01.284) 0:00:03.188 *** 2025-09-03 00:45:15.540139 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-03 00:45:15.540151 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-03 00:45:15.540162 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-03 00:45:15.540180 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-03 00:45:15.540196 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-03 00:45:15.540214 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-03 00:45:15.540226 | orchestrator | 2025-09-03 00:45:15.540237 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-09-03 00:45:15.540248 | orchestrator | Wednesday 03 September 2025 00:44:57 +0000 (0:00:02.989) 0:00:06.178 *** 2025-09-03 00:45:15.540259 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-03 00:45:15.540271 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-03 00:45:15.540282 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-03 00:45:15.540300 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-03 00:45:15.540316 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-03 00:45:15.540327 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-03 00:45:15.540338 | orchestrator | 2025-09-03 
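Each of these service dicts carries everything needed to start the container: name, image, environment, and volume specs (bind mounts plus named volumes such as `kolla_logs` and `redis`). Purely as an illustration of that data, and not of how kolla_docker actually launches containers, the redis-sentinel definition could be flattened into a `docker run` command like this:

```python
"""Sketch: assembling a `docker run` command from a service definition shaped like
the dicts printed above. Illustration of the data only; the real deployment goes
through kolla_docker, and this trimmed dict omits group/healthcheck/dimensions."""

import shlex

REDIS_SENTINEL = {  # trimmed copy of the redis-sentinel definition shown in the log
    "container_name": "redis_sentinel",
    "image": "registry.osism.tech/kolla/redis-sentinel:2024.2",
    "environment": {
        "REDIS_CONF": "/etc/redis/redis.conf",
        "REDIS_GEN_CONF": "/etc/redis/redis-regenerated-by-config-rewrite.conf",
    },
    "volumes": [
        "/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro",
        "/etc/localtime:/etc/localtime:ro",
        "/etc/timezone:/etc/timezone:ro",
        "kolla_logs:/var/log/kolla/",
    ],
}


def docker_run_command(svc: dict) -> str:
    parts = ["docker", "run", "-d", "--name", svc["container_name"]]
    for key, value in svc.get("environment", {}).items():
        parts += ["-e", f"{key}={value}"]
    for volume in svc.get("volumes", []):
        parts += ["-v", volume]          # bind mounts and named volumes share one syntax
    parts.append(svc["image"])
    return " ".join(shlex.quote(p) for p in parts)


if __name__ == "__main__":
    print(docker_run_command(REDIS_SENTINEL))
```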
00:45:15.540354 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-09-03 00:45:15.540366 | orchestrator | Wednesday 03 September 2025 00:44:59 +0000 (0:00:02.759) 0:00:08.938 *** 2025-09-03 00:45:15.540378 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-03 00:45:15.540389 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-03 00:45:15.540400 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-03 00:45:15.540418 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-03 00:45:15.540430 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-03 00:45:15.540445 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': 
{'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-03 00:45:15.540457 | orchestrator | 2025-09-03 00:45:15.540468 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-09-03 00:45:15.540479 | orchestrator | Wednesday 03 September 2025 00:45:01 +0000 (0:00:01.604) 0:00:10.542 *** 2025-09-03 00:45:15.540490 | orchestrator | 2025-09-03 00:45:15.540501 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-09-03 00:45:15.540518 | orchestrator | Wednesday 03 September 2025 00:45:01 +0000 (0:00:00.088) 0:00:10.630 *** 2025-09-03 00:45:15.540529 | orchestrator | 2025-09-03 00:45:15.540540 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-09-03 00:45:15.540550 | orchestrator | Wednesday 03 September 2025 00:45:01 +0000 (0:00:00.061) 0:00:10.692 *** 2025-09-03 00:45:15.540561 | orchestrator | 2025-09-03 00:45:15.540572 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2025-09-03 00:45:15.540583 | orchestrator | Wednesday 03 September 2025 00:45:01 +0000 (0:00:00.059) 0:00:10.751 *** 2025-09-03 00:45:15.540594 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:45:15.540605 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:45:15.540616 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:45:15.540627 | orchestrator | 2025-09-03 00:45:15.540638 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2025-09-03 00:45:15.540649 | orchestrator | Wednesday 03 September 2025 00:45:04 +0000 (0:00:03.210) 0:00:13.962 *** 2025-09-03 00:45:15.540659 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:45:15.540670 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:45:15.540688 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:45:15.540699 | orchestrator | 2025-09-03 00:45:15.540710 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-03 00:45:15.540721 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-03 00:45:15.540732 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-03 00:45:15.540743 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-03 00:45:15.540754 | orchestrator | 2025-09-03 00:45:15.540765 | orchestrator | 2025-09-03 00:45:15.540775 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-03 00:45:15.540786 | orchestrator | Wednesday 03 September 2025 00:45:14 +0000 (0:00:09.456) 0:00:23.419 *** 2025-09-03 00:45:15.540797 | orchestrator | =============================================================================== 2025-09-03 00:45:15.540808 | orchestrator | redis : Restart redis-sentinel 
container -------------------------------- 9.46s 2025-09-03 00:45:15.540819 | orchestrator | redis : Restart redis container ----------------------------------------- 3.21s 2025-09-03 00:45:15.540830 | orchestrator | redis : Copying over default config.json files -------------------------- 2.99s 2025-09-03 00:45:15.540841 | orchestrator | redis : Copying over redis config files --------------------------------- 2.76s 2025-09-03 00:45:15.540851 | orchestrator | redis : Check redis containers ------------------------------------------ 1.60s 2025-09-03 00:45:15.540862 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.28s 2025-09-03 00:45:15.540873 | orchestrator | redis : include_tasks --------------------------------------------------- 0.58s 2025-09-03 00:45:15.540884 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.51s 2025-09-03 00:45:15.540894 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.41s 2025-09-03 00:45:15.540905 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.21s 2025-09-03 00:45:15.540916 | orchestrator | 2025-09-03 00:45:15 | INFO  | Task f3c465b7-dc1c-42cc-acd1-2e6db06e7dc6 is in state STARTED 2025-09-03 00:45:15.540927 | orchestrator | 2025-09-03 00:45:15 | INFO  | Task ce9c6c65-bb8a-4f40-8472-279e4d7da719 is in state STARTED 2025-09-03 00:45:15.540938 | orchestrator | 2025-09-03 00:45:15 | INFO  | Task 8e4f7254-4b68-4c82-94f6-5f083843efa7 is in state STARTED 2025-09-03 00:45:15.540950 | orchestrator | 2025-09-03 00:45:15 | INFO  | Task 63774016-02d7-4c16-9f28-a394c4d3b666 is in state STARTED 2025-09-03 00:45:15.540961 | orchestrator | 2025-09-03 00:45:15 | INFO  | Task 1f742b31-8445-4e4a-9ad9-ea355e00e948 is in state SUCCESS 2025-09-03 00:45:15.540972 | orchestrator | 2025-09-03 00:45:15 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:45:15.540983 | orchestrator | 2025-09-03 00:45:15 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:45:18.545589 | orchestrator | 2025-09-03 00:45:18 | INFO  | Task f3c465b7-dc1c-42cc-acd1-2e6db06e7dc6 is in state STARTED 2025-09-03 00:45:18.545693 | orchestrator | 2025-09-03 00:45:18 | INFO  | Task ce9c6c65-bb8a-4f40-8472-279e4d7da719 is in state STARTED 2025-09-03 00:45:18.545869 | orchestrator | 2025-09-03 00:45:18 | INFO  | Task 8e4f7254-4b68-4c82-94f6-5f083843efa7 is in state STARTED 2025-09-03 00:45:18.546467 | orchestrator | 2025-09-03 00:45:18 | INFO  | Task 63774016-02d7-4c16-9f28-a394c4d3b666 is in state STARTED 2025-09-03 00:45:18.551603 | orchestrator | 2025-09-03 00:45:18 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:45:18.551631 | orchestrator | 2025-09-03 00:45:18 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:45:21.633206 | orchestrator | 2025-09-03 00:45:21 | INFO  | Task f3c465b7-dc1c-42cc-acd1-2e6db06e7dc6 is in state STARTED 2025-09-03 00:45:21.633305 | orchestrator | 2025-09-03 00:45:21 | INFO  | Task ce9c6c65-bb8a-4f40-8472-279e4d7da719 is in state STARTED 2025-09-03 00:45:21.634760 | orchestrator | 2025-09-03 00:45:21 | INFO  | Task 8e4f7254-4b68-4c82-94f6-5f083843efa7 is in state STARTED 2025-09-03 00:45:21.635583 | orchestrator | 2025-09-03 00:45:21 | INFO  | Task 63774016-02d7-4c16-9f28-a394c4d3b666 is in state STARTED 2025-09-03 00:45:21.638377 | orchestrator | 2025-09-03 00:45:21 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in 
state STARTED 2025-09-03 00:45:21.638395 | orchestrator | 2025-09-03 00:45:21 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:45:24.664096 | orchestrator | 2025-09-03 00:45:24 | INFO  | Task f3c465b7-dc1c-42cc-acd1-2e6db06e7dc6 is in state STARTED 2025-09-03 00:45:24.664243 | orchestrator | 2025-09-03 00:45:24 | INFO  | Task ce9c6c65-bb8a-4f40-8472-279e4d7da719 is in state STARTED 2025-09-03 00:45:24.665650 | orchestrator | 2025-09-03 00:45:24 | INFO  | Task 8e4f7254-4b68-4c82-94f6-5f083843efa7 is in state STARTED 2025-09-03 00:45:24.665682 | orchestrator | 2025-09-03 00:45:24 | INFO  | Task 63774016-02d7-4c16-9f28-a394c4d3b666 is in state STARTED 2025-09-03 00:45:24.665892 | orchestrator | 2025-09-03 00:45:24 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:45:24.665908 | orchestrator | 2025-09-03 00:45:24 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:45:27.696790 | orchestrator | 2025-09-03 00:45:27 | INFO  | Task f3c465b7-dc1c-42cc-acd1-2e6db06e7dc6 is in state STARTED 2025-09-03 00:45:27.698800 | orchestrator | 2025-09-03 00:45:27 | INFO  | Task ce9c6c65-bb8a-4f40-8472-279e4d7da719 is in state STARTED 2025-09-03 00:45:27.698836 | orchestrator | 2025-09-03 00:45:27 | INFO  | Task 8e4f7254-4b68-4c82-94f6-5f083843efa7 is in state STARTED 2025-09-03 00:45:27.698848 | orchestrator | 2025-09-03 00:45:27 | INFO  | Task 63774016-02d7-4c16-9f28-a394c4d3b666 is in state STARTED 2025-09-03 00:45:27.698860 | orchestrator | 2025-09-03 00:45:27 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:45:27.698871 | orchestrator | 2025-09-03 00:45:27 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:45:30.741675 | orchestrator | 2025-09-03 00:45:30 | INFO  | Task f3c465b7-dc1c-42cc-acd1-2e6db06e7dc6 is in state STARTED 2025-09-03 00:45:30.741793 | orchestrator | 2025-09-03 00:45:30 | INFO  | Task ce9c6c65-bb8a-4f40-8472-279e4d7da719 is in state STARTED 2025-09-03 00:45:30.741807 | orchestrator | 2025-09-03 00:45:30 | INFO  | Task 8e4f7254-4b68-4c82-94f6-5f083843efa7 is in state STARTED 2025-09-03 00:45:30.741818 | orchestrator | 2025-09-03 00:45:30 | INFO  | Task 63774016-02d7-4c16-9f28-a394c4d3b666 is in state STARTED 2025-09-03 00:45:30.741827 | orchestrator | 2025-09-03 00:45:30 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:45:30.741837 | orchestrator | 2025-09-03 00:45:30 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:45:33.756711 | orchestrator | 2025-09-03 00:45:33 | INFO  | Task f3c465b7-dc1c-42cc-acd1-2e6db06e7dc6 is in state STARTED 2025-09-03 00:45:33.756825 | orchestrator | 2025-09-03 00:45:33 | INFO  | Task ce9c6c65-bb8a-4f40-8472-279e4d7da719 is in state STARTED 2025-09-03 00:45:33.756842 | orchestrator | 2025-09-03 00:45:33 | INFO  | Task 8e4f7254-4b68-4c82-94f6-5f083843efa7 is in state STARTED 2025-09-03 00:45:33.757004 | orchestrator | 2025-09-03 00:45:33 | INFO  | Task 63774016-02d7-4c16-9f28-a394c4d3b666 is in state STARTED 2025-09-03 00:45:33.757623 | orchestrator | 2025-09-03 00:45:33 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:45:33.757647 | orchestrator | 2025-09-03 00:45:33 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:45:36.791824 | orchestrator | 2025-09-03 00:45:36 | INFO  | Task f3c465b7-dc1c-42cc-acd1-2e6db06e7dc6 is in state STARTED 2025-09-03 00:45:36.792096 | orchestrator | 2025-09-03 00:45:36 | INFO  | Task ce9c6c65-bb8a-4f40-8472-279e4d7da719 is in 
state STARTED 2025-09-03 00:45:36.793626 | orchestrator | 2025-09-03 00:45:36 | INFO  | Task 8e4f7254-4b68-4c82-94f6-5f083843efa7 is in state STARTED 2025-09-03 00:45:36.798593 | orchestrator | 2025-09-03 00:45:36 | INFO  | Task 63774016-02d7-4c16-9f28-a394c4d3b666 is in state STARTED 2025-09-03 00:45:36.802482 | orchestrator | 2025-09-03 00:45:36 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:45:36.802750 | orchestrator | 2025-09-03 00:45:36 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:45:39.902998 | orchestrator | 2025-09-03 00:45:39 | INFO  | Task f3c465b7-dc1c-42cc-acd1-2e6db06e7dc6 is in state STARTED 2025-09-03 00:45:39.903272 | orchestrator | 2025-09-03 00:45:39 | INFO  | Task ce9c6c65-bb8a-4f40-8472-279e4d7da719 is in state STARTED 2025-09-03 00:45:39.905118 | orchestrator | 2025-09-03 00:45:39 | INFO  | Task 8e4f7254-4b68-4c82-94f6-5f083843efa7 is in state STARTED 2025-09-03 00:45:39.906362 | orchestrator | 2025-09-03 00:45:39 | INFO  | Task 63774016-02d7-4c16-9f28-a394c4d3b666 is in state STARTED 2025-09-03 00:45:39.907778 | orchestrator | 2025-09-03 00:45:39 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:45:39.908111 | orchestrator | 2025-09-03 00:45:39 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:45:42.961929 | orchestrator | 2025-09-03 00:45:42 | INFO  | Task f3c465b7-dc1c-42cc-acd1-2e6db06e7dc6 is in state STARTED 2025-09-03 00:45:42.963425 | orchestrator | 2025-09-03 00:45:42 | INFO  | Task ce9c6c65-bb8a-4f40-8472-279e4d7da719 is in state STARTED 2025-09-03 00:45:42.964330 | orchestrator | 2025-09-03 00:45:42 | INFO  | Task 8e4f7254-4b68-4c82-94f6-5f083843efa7 is in state STARTED 2025-09-03 00:45:42.966723 | orchestrator | 2025-09-03 00:45:42 | INFO  | Task 63774016-02d7-4c16-9f28-a394c4d3b666 is in state STARTED 2025-09-03 00:45:42.967539 | orchestrator | 2025-09-03 00:45:42 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:45:42.967710 | orchestrator | 2025-09-03 00:45:42 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:45:46.040599 | orchestrator | 2025-09-03 00:45:46 | INFO  | Task f3c465b7-dc1c-42cc-acd1-2e6db06e7dc6 is in state STARTED 2025-09-03 00:45:46.044174 | orchestrator | 2025-09-03 00:45:46 | INFO  | Task ce9c6c65-bb8a-4f40-8472-279e4d7da719 is in state STARTED 2025-09-03 00:45:46.048106 | orchestrator | 2025-09-03 00:45:46 | INFO  | Task 8e4f7254-4b68-4c82-94f6-5f083843efa7 is in state STARTED 2025-09-03 00:45:46.049121 | orchestrator | 2025-09-03 00:45:46 | INFO  | Task 63774016-02d7-4c16-9f28-a394c4d3b666 is in state STARTED 2025-09-03 00:45:46.050221 | orchestrator | 2025-09-03 00:45:46 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:45:46.050245 | orchestrator | 2025-09-03 00:45:46 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:45:49.128959 | orchestrator | 2025-09-03 00:45:49 | INFO  | Task f3c465b7-dc1c-42cc-acd1-2e6db06e7dc6 is in state STARTED 2025-09-03 00:45:49.129103 | orchestrator | 2025-09-03 00:45:49 | INFO  | Task ce9c6c65-bb8a-4f40-8472-279e4d7da719 is in state STARTED 2025-09-03 00:45:49.129119 | orchestrator | 2025-09-03 00:45:49 | INFO  | Task 8e4f7254-4b68-4c82-94f6-5f083843efa7 is in state STARTED 2025-09-03 00:45:49.129153 | orchestrator | 2025-09-03 00:45:49 | INFO  | Task 63774016-02d7-4c16-9f28-a394c4d3b666 is in state STARTED 2025-09-03 00:45:49.129165 | orchestrator | 2025-09-03 00:45:49 | INFO  | Task 
0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:45:49.129176 | orchestrator | 2025-09-03 00:45:49 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:45:52.147100 | orchestrator | 2025-09-03 00:45:52 | INFO  | Task f3c465b7-dc1c-42cc-acd1-2e6db06e7dc6 is in state STARTED 2025-09-03 00:45:52.147321 | orchestrator | 2025-09-03 00:45:52 | INFO  | Task ce9c6c65-bb8a-4f40-8472-279e4d7da719 is in state STARTED 2025-09-03 00:45:52.148300 | orchestrator | 2025-09-03 00:45:52 | INFO  | Task 8e4f7254-4b68-4c82-94f6-5f083843efa7 is in state STARTED 2025-09-03 00:45:52.149280 | orchestrator | 2025-09-03 00:45:52 | INFO  | Task 63774016-02d7-4c16-9f28-a394c4d3b666 is in state STARTED 2025-09-03 00:45:52.150247 | orchestrator | 2025-09-03 00:45:52 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:45:52.150286 | orchestrator | 2025-09-03 00:45:52 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:45:55.397877 | orchestrator | 2025-09-03 00:45:55 | INFO  | Task f3c465b7-dc1c-42cc-acd1-2e6db06e7dc6 is in state STARTED 2025-09-03 00:45:55.397947 | orchestrator | 2025-09-03 00:45:55 | INFO  | Task ce9c6c65-bb8a-4f40-8472-279e4d7da719 is in state STARTED 2025-09-03 00:45:55.398140 | orchestrator | 2025-09-03 00:45:55 | INFO  | Task 8e4f7254-4b68-4c82-94f6-5f083843efa7 is in state STARTED 2025-09-03 00:45:55.398996 | orchestrator | 2025-09-03 00:45:55 | INFO  | Task 63774016-02d7-4c16-9f28-a394c4d3b666 is in state SUCCESS 2025-09-03 00:45:55.400580 | orchestrator | 2025-09-03 00:45:55.400605 | orchestrator | 2025-09-03 00:45:55.400613 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-03 00:45:55.400618 | orchestrator | 2025-09-03 00:45:55.400622 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-03 00:45:55.400627 | orchestrator | Wednesday 03 September 2025 00:44:51 +0000 (0:00:00.290) 0:00:00.290 *** 2025-09-03 00:45:55.400631 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:45:55.400637 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:45:55.400641 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:45:55.400645 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:45:55.400649 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:45:55.400653 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:45:55.400657 | orchestrator | 2025-09-03 00:45:55.400661 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-03 00:45:55.400665 | orchestrator | Wednesday 03 September 2025 00:44:52 +0000 (0:00:00.940) 0:00:01.230 *** 2025-09-03 00:45:55.400669 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-03 00:45:55.400673 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-03 00:45:55.400677 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-03 00:45:55.400681 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-03 00:45:55.400685 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-03 00:45:55.400689 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-03 00:45:55.400693 | orchestrator | 2025-09-03 00:45:55.400697 | orchestrator | PLAY [Apply role openvswitch] 
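The repeating "Task <uuid> is in state STARTED" / "Wait 1 second(s) until the next check" lines above come from the OSISM client polling the Celery-style task IDs of the parallel kolla-ansible runs until each reaches a terminal state, as task 63774016-… just did with SUCCESS. A backend-agnostic sketch of such a wait loop, with the state lookup left as an injected callable because the real client API is not shown in this log, might look like:

```python
"""Sketch of a wait loop producing output in the shape of the
'Task <id> is in state ...' / 'Wait 1 second(s) until the next check' lines above.
The terminal-state set and the callable-based lookup are assumptions."""

import time
from typing import Callable, Iterable

TERMINAL_STATES = {"SUCCESS", "FAILURE"}   # assumed terminal states


def wait_for_tasks(task_ids: Iterable[str],
                   get_state: Callable[[str], str],
                   interval: float = 1.0) -> dict[str, str]:
    """Poll every task until all of them reach a terminal state."""
    pending = list(task_ids)
    results: dict[str, str] = {}
    while pending:
        still_pending = []
        for task_id in pending:
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in TERMINAL_STATES:
                results[task_id] = state
            else:
                still_pending.append(task_id)
        pending = still_pending
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
    return results
```

The roughly three-second spacing between polling rounds in the log, despite the one-second sleep message, suggests the state lookups themselves account for most of the interval.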
************************************************** 2025-09-03 00:45:55.400700 | orchestrator | 2025-09-03 00:45:55.400704 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2025-09-03 00:45:55.400708 | orchestrator | Wednesday 03 September 2025 00:44:53 +0000 (0:00:00.925) 0:00:02.156 *** 2025-09-03 00:45:55.400726 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 00:45:55.400732 | orchestrator | 2025-09-03 00:45:55.400735 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-09-03 00:45:55.400739 | orchestrator | Wednesday 03 September 2025 00:44:54 +0000 (0:00:01.549) 0:00:03.706 *** 2025-09-03 00:45:55.400743 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-09-03 00:45:55.400747 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-09-03 00:45:55.400751 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-09-03 00:45:55.400755 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-09-03 00:45:55.400758 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-09-03 00:45:55.400763 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-09-03 00:45:55.400766 | orchestrator | 2025-09-03 00:45:55.400770 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-09-03 00:45:55.400774 | orchestrator | Wednesday 03 September 2025 00:44:56 +0000 (0:00:01.185) 0:00:04.891 *** 2025-09-03 00:45:55.400778 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-09-03 00:45:55.400782 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-09-03 00:45:55.400786 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-09-03 00:45:55.400789 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-09-03 00:45:55.400793 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-09-03 00:45:55.400797 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-09-03 00:45:55.400801 | orchestrator | 2025-09-03 00:45:55.400804 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-09-03 00:45:55.400808 | orchestrator | Wednesday 03 September 2025 00:44:57 +0000 (0:00:01.431) 0:00:06.323 *** 2025-09-03 00:45:55.400812 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2025-09-03 00:45:55.400816 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:45:55.400820 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2025-09-03 00:45:55.400824 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:45:55.400828 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2025-09-03 00:45:55.400832 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:45:55.400835 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2025-09-03 00:45:55.400839 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:45:55.400843 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2025-09-03 00:45:55.400847 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:45:55.400851 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2025-09-03 00:45:55.400854 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:45:55.400858 | orchestrator | 2025-09-03 
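The module-load steps above (Load modules, Persist modules via modules-load.d) boil down to loading the openvswitch kernel module immediately and writing a modules-load.d entry so it is loaded again after a reboot; the "Drop module persistence" task is skipped here since persistence is wanted. A minimal sketch of those two operations (the exact file name under /etc/modules-load.d is an assumption, and the role itself uses Ansible modules rather than this script) could be:

```python
"""Sketch of what the module-load role's two active steps amount to on each host.
Requires root; the modules-load.d file name is an assumption."""

import subprocess
from pathlib import Path


def load_module(name: str) -> None:
    """'Load modules': insert the kernel module right now."""
    subprocess.run(["modprobe", name], check=True)


def persist_module(name: str) -> None:
    """'Persist modules via modules-load.d': make the module load on every boot."""
    conf = Path(f"/etc/modules-load.d/{name}.conf")
    conf.write_text(f"{name}\n")


if __name__ == "__main__":
    load_module("openvswitch")
    persist_module("openvswitch")
```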
00:45:55.400862 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2025-09-03 00:45:55.400866 | orchestrator | Wednesday 03 September 2025 00:44:58 +0000 (0:00:01.248) 0:00:07.572 *** 2025-09-03 00:45:55.400873 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:45:55.400877 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:45:55.400881 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:45:55.400885 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:45:55.400889 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:45:55.400892 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:45:55.400896 | orchestrator | 2025-09-03 00:45:55.400900 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-09-03 00:45:55.400904 | orchestrator | Wednesday 03 September 2025 00:44:59 +0000 (0:00:00.674) 0:00:08.247 *** 2025-09-03 00:45:55.400916 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-03 00:45:55.400927 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-03 00:45:55.400931 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-03 00:45:55.400935 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-03 00:45:55.400940 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-03 00:45:55.400946 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-03 00:45:55.400956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-03 00:45:55.400960 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-03 00:45:55.400964 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': 
True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-03 00:45:55.400968 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-03 00:45:55.400972 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-03 00:45:55.400980 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-03 00:45:55.400988 | orchestrator | 2025-09-03 00:45:55.400992 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-09-03 00:45:55.400996 | orchestrator | Wednesday 03 September 2025 00:45:00 +0000 (0:00:01.472) 0:00:09.720 *** 2025-09-03 00:45:55.401000 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-03 00:45:55.401004 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-03 00:45:55.401008 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-03 00:45:55.401012 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-03 00:45:55.401018 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-03 00:45:55.401050 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-03 00:45:55.401054 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-03 00:45:55.401058 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-03 00:45:55.401062 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-03 00:45:55.401066 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-03 00:45:55.401072 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 
'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-03 00:45:55.401082 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-03 00:45:55.401086 | orchestrator | 2025-09-03 00:45:55.401090 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2025-09-03 00:45:55.401094 | orchestrator | Wednesday 03 September 2025 00:45:03 +0000 (0:00:02.550) 0:00:12.270 *** 2025-09-03 00:45:55.401098 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:45:55.401102 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:45:55.401106 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:45:55.401109 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:45:55.401113 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:45:55.401117 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:45:55.401121 | orchestrator | 2025-09-03 00:45:55.401125 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-09-03 00:45:55.401129 | orchestrator | Wednesday 03 September 2025 00:45:04 +0000 (0:00:01.406) 0:00:13.677 *** 2025-09-03 00:45:55.401133 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-03 00:45:55.401137 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-03 00:45:55.401141 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-03 00:45:55.401149 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-03 00:45:55.401156 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-03 00:45:55.401161 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-03 00:45:55.401165 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-03 00:45:55.401169 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-03 00:45:55.401173 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-03 00:45:55.401182 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-03 00:45:55.401189 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-03 00:45:55.401194 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-03 00:45:55.401198 | orchestrator | 2025-09-03 00:45:55.401202 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-03 00:45:55.401206 | orchestrator | Wednesday 03 September 2025 00:45:07 +0000 (0:00:02.879) 0:00:16.556 *** 2025-09-03 00:45:55.401209 | orchestrator | 2025-09-03 00:45:55.401213 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-03 00:45:55.401217 | orchestrator | Wednesday 03 September 2025 00:45:08 +0000 (0:00:00.671) 0:00:17.227 *** 2025-09-03 00:45:55.401221 | orchestrator | 2025-09-03 00:45:55.401225 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-03 00:45:55.401229 | orchestrator | Wednesday 03 September 2025 00:45:08 +0000 (0:00:00.354) 0:00:17.582 *** 2025-09-03 00:45:55.401232 | orchestrator | 2025-09-03 00:45:55.401236 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-03 00:45:55.401240 | orchestrator | Wednesday 03 September 2025 00:45:09 +0000 (0:00:00.332) 0:00:17.914 *** 2025-09-03 00:45:55.401244 | orchestrator | 2025-09-03 00:45:55.401248 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-03 00:45:55.401252 | orchestrator | Wednesday 03 September 2025 00:45:09 +0000 (0:00:00.182) 0:00:18.096 *** 2025-09-03 00:45:55.401255 | orchestrator | 2025-09-03 00:45:55.401259 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-03 00:45:55.401267 | orchestrator | Wednesday 03 September 2025 00:45:09 +0000 (0:00:00.142) 0:00:18.239 *** 2025-09-03 00:45:55.401270 | orchestrator | 2025-09-03 00:45:55.401274 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2025-09-03 00:45:55.401278 | orchestrator | Wednesday 03 September 2025 00:45:09 +0000 (0:00:00.256) 0:00:18.495 *** 2025-09-03 00:45:55.401282 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:45:55.401286 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:45:55.401290 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:45:55.401293 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:45:55.401297 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:45:55.401301 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:45:55.401305 | orchestrator | 2025-09-03 00:45:55.401309 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2025-09-03 00:45:55.401312 | orchestrator | Wednesday 03 September 2025 00:45:17 +0000 (0:00:07.679) 0:00:26.175 *** 2025-09-03 00:45:55.401316 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:45:55.401320 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:45:55.401324 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:45:55.401328 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:45:55.401332 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:45:55.401335 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:45:55.401339 | orchestrator | 2025-09-03 00:45:55.401343 | orchestrator | RUNNING HANDLER [openvswitch : 
Restart openvswitch-vswitchd container] ********* 2025-09-03 00:45:55.401347 | orchestrator | Wednesday 03 September 2025 00:45:18 +0000 (0:00:01.337) 0:00:27.512 *** 2025-09-03 00:45:55.401351 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:45:55.401355 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:45:55.401358 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:45:55.401362 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:45:55.401366 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:45:55.401370 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:45:55.401373 | orchestrator | 2025-09-03 00:45:55.401377 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2025-09-03 00:45:55.401381 | orchestrator | Wednesday 03 September 2025 00:45:27 +0000 (0:00:09.057) 0:00:36.569 *** 2025-09-03 00:45:55.401386 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2025-09-03 00:45:55.401390 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2025-09-03 00:45:55.401394 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2025-09-03 00:45:55.401398 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-09-03 00:45:55.401402 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2025-09-03 00:45:55.401408 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-09-03 00:45:55.401412 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2025-09-03 00:45:55.401416 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2025-09-03 00:45:55.401420 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2025-09-03 00:45:55.401424 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-09-03 00:45:55.401428 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2025-09-03 00:45:55.401431 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-09-03 00:45:55.401435 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-03 00:45:55.401442 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-03 00:45:55.401446 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-03 00:45:55.401450 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-03 00:45:55.401454 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-03 00:45:55.401457 | orchestrator | ok: [testbed-node-2] => (item={'col': 
'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-03 00:45:55.401461 | orchestrator | 2025-09-03 00:45:55.401465 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2025-09-03 00:45:55.401469 | orchestrator | Wednesday 03 September 2025 00:45:35 +0000 (0:00:07.418) 0:00:43.988 *** 2025-09-03 00:45:55.401473 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2025-09-03 00:45:55.401477 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2025-09-03 00:45:55.401481 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:45:55.401484 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:45:55.401488 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2025-09-03 00:45:55.401492 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:45:55.401496 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2025-09-03 00:45:55.401500 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2025-09-03 00:45:55.401504 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2025-09-03 00:45:55.401507 | orchestrator | 2025-09-03 00:45:55.401511 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2025-09-03 00:45:55.401515 | orchestrator | Wednesday 03 September 2025 00:45:38 +0000 (0:00:02.978) 0:00:46.967 *** 2025-09-03 00:45:55.401519 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2025-09-03 00:45:55.401523 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2025-09-03 00:45:55.401527 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:45:55.401531 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2025-09-03 00:45:55.401535 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:45:55.401538 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:45:55.401542 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2025-09-03 00:45:55.401546 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2025-09-03 00:45:55.401550 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2025-09-03 00:45:55.401554 | orchestrator | 2025-09-03 00:45:55.401558 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-09-03 00:45:55.401561 | orchestrator | Wednesday 03 September 2025 00:45:43 +0000 (0:00:04.991) 0:00:51.958 *** 2025-09-03 00:45:55.401565 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:45:55.401569 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:45:55.401573 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:45:55.401577 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:45:55.401580 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:45:55.401584 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:45:55.401588 | orchestrator | 2025-09-03 00:45:55.401592 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-03 00:45:55.401596 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-03 00:45:55.401602 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-03 00:45:55.401606 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-03 00:45:55.401612 | orchestrator | testbed-node-3 : ok=13  changed=9  
unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-03 00:45:55.401616 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-03 00:45:55.401622 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-03 00:45:55.401626 | orchestrator | 2025-09-03 00:45:55.401630 | orchestrator | 2025-09-03 00:45:55.401634 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-03 00:45:55.401637 | orchestrator | Wednesday 03 September 2025 00:45:52 +0000 (0:00:09.121) 0:01:01.080 *** 2025-09-03 00:45:55.401641 | orchestrator | =============================================================================== 2025-09-03 00:45:55.401645 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 18.18s 2025-09-03 00:45:55.401649 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 7.68s 2025-09-03 00:45:55.401653 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.42s 2025-09-03 00:45:55.401656 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.99s 2025-09-03 00:45:55.401660 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.98s 2025-09-03 00:45:55.401664 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.88s 2025-09-03 00:45:55.401668 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.55s 2025-09-03 00:45:55.401671 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.94s 2025-09-03 00:45:55.401675 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.55s 2025-09-03 00:45:55.401679 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.47s 2025-09-03 00:45:55.401683 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.43s 2025-09-03 00:45:55.401686 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.41s 2025-09-03 00:45:55.401690 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.34s 2025-09-03 00:45:55.401694 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.25s 2025-09-03 00:45:55.401698 | orchestrator | module-load : Load modules ---------------------------------------------- 1.19s 2025-09-03 00:45:55.401702 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.94s 2025-09-03 00:45:55.401705 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.93s 2025-09-03 00:45:55.401709 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.67s 2025-09-03 00:45:55.401713 | orchestrator | 2025-09-03 00:45:55 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:45:55.401717 | orchestrator | 2025-09-03 00:45:55 | INFO  | Task 0a47554e-e682-4d70-9aa6-6736a719fac4 is in state STARTED 2025-09-03 00:45:55.401721 | orchestrator | 2025-09-03 00:45:55 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:45:58.568824 | orchestrator | 2025-09-03 00:45:58 | INFO  | Task f3c465b7-dc1c-42cc-acd1-2e6db06e7dc6 is in state STARTED 2025-09-03 00:45:58.611746 | 
orchestrator | 2025-09-03 00:45:58 | INFO  | Task ce9c6c65-bb8a-4f40-8472-279e4d7da719 is in state STARTED 2025-09-03 00:45:58.611801 | orchestrator | 2025-09-03 00:45:58 | INFO  | Task 8e4f7254-4b68-4c82-94f6-5f083843efa7 is in state STARTED 2025-09-03 00:45:58.611813 | orchestrator | 2025-09-03 00:45:58 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:45:58.611824 | orchestrator | 2025-09-03 00:45:58 | INFO  | Task 0a47554e-e682-4d70-9aa6-6736a719fac4 is in state STARTED 2025-09-03 00:45:58.611861 | orchestrator | 2025-09-03 00:45:58 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:46:02.002491 | orchestrator | 2025-09-03 00:46:02 | INFO  | Task f3c465b7-dc1c-42cc-acd1-2e6db06e7dc6 is in state STARTED 2025-09-03 00:46:02.003726 | orchestrator | 2025-09-03 00:46:02 | INFO  | Task ce9c6c65-bb8a-4f40-8472-279e4d7da719 is in state STARTED 2025-09-03 00:46:02.004338 | orchestrator | 2025-09-03 00:46:02 | INFO  | Task 8e4f7254-4b68-4c82-94f6-5f083843efa7 is in state STARTED 2025-09-03 00:46:02.007523 | orchestrator | 2025-09-03 00:46:02 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:46:02.009685 | orchestrator | 2025-09-03 00:46:02 | INFO  | Task 0a47554e-e682-4d70-9aa6-6736a719fac4 is in state STARTED 2025-09-03 00:46:02.009732 | orchestrator | 2025-09-03 00:46:02 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:46:05.087450 | orchestrator | 2025-09-03 00:46:05 | INFO  | Task f3c465b7-dc1c-42cc-acd1-2e6db06e7dc6 is in state STARTED 2025-09-03 00:46:05.088437 | orchestrator | 2025-09-03 00:46:05.088473 | orchestrator | 2025-09-03 00:46:05 | INFO  | Task ce9c6c65-bb8a-4f40-8472-279e4d7da719 is in state SUCCESS 2025-09-03 00:46:05.090472 | orchestrator | 2025-09-03 00:46:05.090525 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2025-09-03 00:46:05.090546 | orchestrator | 2025-09-03 00:46:05.090566 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2025-09-03 00:46:05.090585 | orchestrator | Wednesday 03 September 2025 00:42:23 +0000 (0:00:00.173) 0:00:00.173 *** 2025-09-03 00:46:05.090605 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:46:05.090626 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:46:05.090646 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:46:05.090659 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:46:05.090671 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:46:05.090682 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:46:05.090693 | orchestrator | 2025-09-03 00:46:05.090705 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2025-09-03 00:46:05.090716 | orchestrator | Wednesday 03 September 2025 00:42:24 +0000 (0:00:00.729) 0:00:00.903 *** 2025-09-03 00:46:05.090728 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:46:05.090740 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:46:05.090752 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:46:05.090764 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:46:05.090775 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:46:05.090786 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:46:05.090797 | orchestrator | 2025-09-03 00:46:05.090809 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2025-09-03 00:46:05.090820 | orchestrator | Wednesday 03 September 2025 00:42:24 +0000 
(0:00:00.602) 0:00:01.505 *** 2025-09-03 00:46:05.090832 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:46:05.090843 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:46:05.090854 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:46:05.090865 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:46:05.090889 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:46:05.090901 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:46:05.090912 | orchestrator | 2025-09-03 00:46:05.090923 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2025-09-03 00:46:05.090934 | orchestrator | Wednesday 03 September 2025 00:42:25 +0000 (0:00:00.630) 0:00:02.136 *** 2025-09-03 00:46:05.090946 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:46:05.090957 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:46:05.090968 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:46:05.090979 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:46:05.090990 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:46:05.091001 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:46:05.091058 | orchestrator | 2025-09-03 00:46:05.091074 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2025-09-03 00:46:05.091087 | orchestrator | Wednesday 03 September 2025 00:42:27 +0000 (0:00:02.054) 0:00:04.191 *** 2025-09-03 00:46:05.091101 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:46:05.091115 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:46:05.091128 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:46:05.091142 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:46:05.091156 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:46:05.091170 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:46:05.091183 | orchestrator | 2025-09-03 00:46:05.091196 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2025-09-03 00:46:05.091209 | orchestrator | Wednesday 03 September 2025 00:42:28 +0000 (0:00:00.878) 0:00:05.069 *** 2025-09-03 00:46:05.091222 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:46:05.091236 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:46:05.091250 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:46:05.091264 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:46:05.091278 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:46:05.091290 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:46:05.091304 | orchestrator | 2025-09-03 00:46:05.091318 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2025-09-03 00:46:05.091332 | orchestrator | Wednesday 03 September 2025 00:42:29 +0000 (0:00:01.082) 0:00:06.152 *** 2025-09-03 00:46:05.091345 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:46:05.091358 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:46:05.091372 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:46:05.091387 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:46:05.091398 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:46:05.091410 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:46:05.091420 | orchestrator | 2025-09-03 00:46:05.091431 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2025-09-03 00:46:05.091442 | orchestrator | Wednesday 03 September 2025 00:42:30 +0000 
(0:00:00.519) 0:00:06.671 *** 2025-09-03 00:46:05.091454 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:46:05.091465 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:46:05.091476 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:46:05.091487 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:46:05.091498 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:46:05.091509 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:46:05.091520 | orchestrator | 2025-09-03 00:46:05.091531 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2025-09-03 00:46:05.091542 | orchestrator | Wednesday 03 September 2025 00:42:31 +0000 (0:00:01.012) 0:00:07.683 *** 2025-09-03 00:46:05.091553 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-03 00:46:05.091564 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-03 00:46:05.091576 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:46:05.091587 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-03 00:46:05.091598 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-03 00:46:05.091609 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:46:05.091625 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-03 00:46:05.091636 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-03 00:46:05.091648 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:46:05.091659 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-03 00:46:05.091685 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-03 00:46:05.091697 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:46:05.091708 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-03 00:46:05.091730 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-03 00:46:05.091741 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:46:05.091753 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-03 00:46:05.091764 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-03 00:46:05.091775 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:46:05.091786 | orchestrator | 2025-09-03 00:46:05.091797 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2025-09-03 00:46:05.091808 | orchestrator | Wednesday 03 September 2025 00:42:31 +0000 (0:00:00.907) 0:00:08.591 *** 2025-09-03 00:46:05.091819 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:46:05.091830 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:46:05.091842 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:46:05.091853 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:46:05.091864 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:46:05.091875 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:46:05.091886 | orchestrator | 2025-09-03 00:46:05.091897 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2025-09-03 00:46:05.091910 | orchestrator | Wednesday 03 
September 2025 00:42:33 +0000 (0:00:01.457) 0:00:10.048 *** 2025-09-03 00:46:05.091921 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:46:05.091932 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:46:05.091943 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:46:05.091954 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:46:05.091965 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:46:05.092014 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:46:05.092063 | orchestrator | 2025-09-03 00:46:05.092076 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2025-09-03 00:46:05.092087 | orchestrator | Wednesday 03 September 2025 00:42:34 +0000 (0:00:00.756) 0:00:10.804 *** 2025-09-03 00:46:05.092097 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:46:05.092108 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:46:05.092119 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:46:05.092130 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:46:05.092141 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:46:05.092152 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:46:05.092163 | orchestrator | 2025-09-03 00:46:05.092174 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2025-09-03 00:46:05.092184 | orchestrator | Wednesday 03 September 2025 00:42:40 +0000 (0:00:06.086) 0:00:16.890 *** 2025-09-03 00:46:05.092196 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:46:05.092206 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:46:05.092217 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:46:05.092228 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:46:05.092239 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:46:05.092250 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:46:05.092261 | orchestrator | 2025-09-03 00:46:05.092272 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2025-09-03 00:46:05.092283 | orchestrator | Wednesday 03 September 2025 00:42:41 +0000 (0:00:01.245) 0:00:18.138 *** 2025-09-03 00:46:05.092294 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:46:05.092304 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:46:05.092315 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:46:05.092326 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:46:05.092337 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:46:05.092348 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:46:05.092359 | orchestrator | 2025-09-03 00:46:05.092370 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2025-09-03 00:46:05.092383 | orchestrator | Wednesday 03 September 2025 00:42:43 +0000 (0:00:01.741) 0:00:19.879 *** 2025-09-03 00:46:05.092394 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:46:05.092412 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:46:05.092423 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:46:05.092434 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:46:05.092445 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:46:05.092457 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:46:05.092467 | orchestrator | 2025-09-03 00:46:05.092478 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2025-09-03 00:46:05.092489 | orchestrator | Wednesday 03 September 2025 
00:42:44 +0000 (0:00:00.975) 0:00:20.854 *** 2025-09-03 00:46:05.092500 | orchestrator | changed: [testbed-node-3] => (item=rancher) 2025-09-03 00:46:05.092512 | orchestrator | changed: [testbed-node-4] => (item=rancher) 2025-09-03 00:46:05.092523 | orchestrator | changed: [testbed-node-3] => (item=rancher/k3s) 2025-09-03 00:46:05.092534 | orchestrator | changed: [testbed-node-5] => (item=rancher) 2025-09-03 00:46:05.092545 | orchestrator | changed: [testbed-node-4] => (item=rancher/k3s) 2025-09-03 00:46:05.092556 | orchestrator | changed: [testbed-node-0] => (item=rancher) 2025-09-03 00:46:05.092567 | orchestrator | changed: [testbed-node-5] => (item=rancher/k3s) 2025-09-03 00:46:05.092578 | orchestrator | changed: [testbed-node-1] => (item=rancher) 2025-09-03 00:46:05.092589 | orchestrator | changed: [testbed-node-0] => (item=rancher/k3s) 2025-09-03 00:46:05.092599 | orchestrator | changed: [testbed-node-2] => (item=rancher) 2025-09-03 00:46:05.092610 | orchestrator | changed: [testbed-node-1] => (item=rancher/k3s) 2025-09-03 00:46:05.092621 | orchestrator | changed: [testbed-node-2] => (item=rancher/k3s) 2025-09-03 00:46:05.092632 | orchestrator | 2025-09-03 00:46:05.092643 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2025-09-03 00:46:05.092659 | orchestrator | Wednesday 03 September 2025 00:42:46 +0000 (0:00:02.262) 0:00:23.117 *** 2025-09-03 00:46:05.092670 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:46:05.092681 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:46:05.092692 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:46:05.092703 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:46:05.092714 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:46:05.092725 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:46:05.092736 | orchestrator | 2025-09-03 00:46:05.092755 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2025-09-03 00:46:05.092767 | orchestrator | 2025-09-03 00:46:05.092778 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2025-09-03 00:46:05.092789 | orchestrator | Wednesday 03 September 2025 00:42:48 +0000 (0:00:02.377) 0:00:25.495 *** 2025-09-03 00:46:05.092801 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:46:05.092812 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:46:05.092823 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:46:05.092834 | orchestrator | 2025-09-03 00:46:05.092845 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2025-09-03 00:46:05.092856 | orchestrator | Wednesday 03 September 2025 00:42:50 +0000 (0:00:01.302) 0:00:26.798 *** 2025-09-03 00:46:05.092867 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:46:05.092878 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:46:05.092889 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:46:05.092900 | orchestrator | 2025-09-03 00:46:05.092911 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2025-09-03 00:46:05.092922 | orchestrator | Wednesday 03 September 2025 00:42:51 +0000 (0:00:01.319) 0:00:28.118 *** 2025-09-03 00:46:05.092933 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:46:05.092944 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:46:05.092955 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:46:05.092966 | orchestrator | 2025-09-03 00:46:05.092977 | orchestrator | TASK 
[k3s_server : Clean previous runs of k3s-init] **************************** 2025-09-03 00:46:05.092988 | orchestrator | Wednesday 03 September 2025 00:42:53 +0000 (0:00:01.983) 0:00:30.101 *** 2025-09-03 00:46:05.092999 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:46:05.093010 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:46:05.093079 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:46:05.093092 | orchestrator | 2025-09-03 00:46:05.093103 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2025-09-03 00:46:05.093114 | orchestrator | Wednesday 03 September 2025 00:42:55 +0000 (0:00:01.806) 0:00:31.907 *** 2025-09-03 00:46:05.093125 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:46:05.093137 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:46:05.093148 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:46:05.093159 | orchestrator | 2025-09-03 00:46:05.093170 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2025-09-03 00:46:05.093181 | orchestrator | Wednesday 03 September 2025 00:42:55 +0000 (0:00:00.398) 0:00:32.306 *** 2025-09-03 00:46:05.093191 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:46:05.093202 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:46:05.093213 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:46:05.093224 | orchestrator | 2025-09-03 00:46:05.093235 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2025-09-03 00:46:05.093246 | orchestrator | Wednesday 03 September 2025 00:42:56 +0000 (0:00:00.643) 0:00:32.949 *** 2025-09-03 00:46:05.093257 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:46:05.093268 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:46:05.093279 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:46:05.093290 | orchestrator | 2025-09-03 00:46:05.093301 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2025-09-03 00:46:05.093312 | orchestrator | Wednesday 03 September 2025 00:42:57 +0000 (0:00:01.604) 0:00:34.554 *** 2025-09-03 00:46:05.093323 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 00:46:05.093334 | orchestrator | 2025-09-03 00:46:05.093345 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2025-09-03 00:46:05.093356 | orchestrator | Wednesday 03 September 2025 00:42:58 +0000 (0:00:00.628) 0:00:35.183 *** 2025-09-03 00:46:05.093367 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:46:05.093378 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:46:05.093389 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:46:05.093400 | orchestrator | 2025-09-03 00:46:05.093411 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2025-09-03 00:46:05.093422 | orchestrator | Wednesday 03 September 2025 00:43:00 +0000 (0:00:02.111) 0:00:37.294 *** 2025-09-03 00:46:05.093432 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:46:05.093443 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:46:05.093455 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:46:05.093466 | orchestrator | 2025-09-03 00:46:05.093476 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2025-09-03 00:46:05.093487 | orchestrator | Wednesday 03 September 2025 00:43:01 +0000 (0:00:00.655) 
0:00:37.949 *** 2025-09-03 00:46:05.093498 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:46:05.093509 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:46:05.093520 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:46:05.093531 | orchestrator | 2025-09-03 00:46:05.093542 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2025-09-03 00:46:05.093553 | orchestrator | Wednesday 03 September 2025 00:43:02 +0000 (0:00:01.459) 0:00:39.409 *** 2025-09-03 00:46:05.093564 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:46:05.093575 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:46:05.093586 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:46:05.093596 | orchestrator | 2025-09-03 00:46:05.093606 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2025-09-03 00:46:05.093616 | orchestrator | Wednesday 03 September 2025 00:43:04 +0000 (0:00:01.596) 0:00:41.006 *** 2025-09-03 00:46:05.093625 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:46:05.093635 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:46:05.093645 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:46:05.093655 | orchestrator | 2025-09-03 00:46:05.093665 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2025-09-03 00:46:05.093681 | orchestrator | Wednesday 03 September 2025 00:43:04 +0000 (0:00:00.490) 0:00:41.496 *** 2025-09-03 00:46:05.093691 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:46:05.093701 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:46:05.093715 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:46:05.093725 | orchestrator | 2025-09-03 00:46:05.093735 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2025-09-03 00:46:05.093745 | orchestrator | Wednesday 03 September 2025 00:43:05 +0000 (0:00:00.396) 0:00:41.893 *** 2025-09-03 00:46:05.093755 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:46:05.093764 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:46:05.093774 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:46:05.093784 | orchestrator | 2025-09-03 00:46:05.093800 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2025-09-03 00:46:05.093810 | orchestrator | Wednesday 03 September 2025 00:43:07 +0000 (0:00:02.279) 0:00:44.173 *** 2025-09-03 00:46:05.093821 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-09-03 00:46:05.093832 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-09-03 00:46:05.093842 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-09-03 00:46:05.093852 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-09-03 00:46:05.093862 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 
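Note on the retry messages above and below: the k3s_server role initialises the cluster inside the transient k3s-init service (as logged earlier) and then polls until every control-plane node has registered, so a handful of FAILED - RETRYING lines are expected while the cluster converges; here the check succeeded after roughly 57 seconds of polling. A minimal sketch of such a membership check, assuming the role counts control-plane nodes via k3s kubectl; the task name and retry count are taken from the log, while the command, register names and group variable are illustrative only:

    # Hypothetical re-creation of the join check reported in the log.
    # Polls until `k3s kubectl get nodes` returns as many control-plane
    # node names as there are hosts in the (assumed) master group,
    # giving up after 20 attempts.
    - name: Verify that all nodes actually joined (check k3s-init.service if this fails)
      ansible.builtin.command:
        cmd: >-
          k3s kubectl get nodes
          -l "node-role.kubernetes.io/master=true"
          -o jsonpath='{.items[*].metadata.name}'
      register: joined_nodes
      until: >-
        joined_nodes.rc == 0 and
        (joined_nodes.stdout.split() | length) == (groups['master'] | length)
      retries: 20
      delay: 10
      changed_when: false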
2025-09-03 00:46:05.093872 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-09-03 00:46:05.093882 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-09-03 00:46:05.093892 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-09-03 00:46:05.093902 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-09-03 00:46:05.093911 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-09-03 00:46:05.093921 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-09-03 00:46:05.093931 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-09-03 00:46:05.093941 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2025-09-03 00:46:05.093951 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2025-09-03 00:46:05.093960 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2025-09-03 00:46:05.093970 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:46:05.093981 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:46:05.093991 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:46:05.094000 | orchestrator | 2025-09-03 00:46:05.094010 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2025-09-03 00:46:05.094078 | orchestrator | Wednesday 03 September 2025 00:44:04 +0000 (0:00:56.686) 0:01:40.859 *** 2025-09-03 00:46:05.094097 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:46:05.094107 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:46:05.094117 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:46:05.094127 | orchestrator | 2025-09-03 00:46:05.094137 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2025-09-03 00:46:05.094148 | orchestrator | Wednesday 03 September 2025 00:44:04 +0000 (0:00:00.434) 0:01:41.294 *** 2025-09-03 00:46:05.094158 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:46:05.094186 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:46:05.094196 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:46:05.094206 | orchestrator | 2025-09-03 00:46:05.094216 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2025-09-03 00:46:05.094226 | orchestrator | Wednesday 03 September 2025 00:44:05 +0000 (0:00:01.032) 0:01:42.326 *** 2025-09-03 00:46:05.094235 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:46:05.094245 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:46:05.094255 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:46:05.094265 | orchestrator | 2025-09-03 00:46:05.094274 | orchestrator | TASK 
[k3s_server : Enable and check K3s service] ******************************* 2025-09-03 00:46:05.094284 | orchestrator | Wednesday 03 September 2025 00:44:06 +0000 (0:00:01.269) 0:01:43.595 *** 2025-09-03 00:46:05.094294 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:46:05.094304 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:46:05.094314 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:46:05.094324 | orchestrator | 2025-09-03 00:46:05.094334 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2025-09-03 00:46:05.094343 | orchestrator | Wednesday 03 September 2025 00:44:32 +0000 (0:00:25.288) 0:02:08.884 *** 2025-09-03 00:46:05.094353 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:46:05.094363 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:46:05.094373 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:46:05.094383 | orchestrator | 2025-09-03 00:46:05.094397 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2025-09-03 00:46:05.094407 | orchestrator | Wednesday 03 September 2025 00:44:32 +0000 (0:00:00.688) 0:02:09.572 *** 2025-09-03 00:46:05.094417 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:46:05.094427 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:46:05.094437 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:46:05.094447 | orchestrator | 2025-09-03 00:46:05.094464 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2025-09-03 00:46:05.094474 | orchestrator | Wednesday 03 September 2025 00:44:33 +0000 (0:00:00.632) 0:02:10.204 *** 2025-09-03 00:46:05.094484 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:46:05.094494 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:46:05.094504 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:46:05.094514 | orchestrator | 2025-09-03 00:46:05.094524 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2025-09-03 00:46:05.094534 | orchestrator | Wednesday 03 September 2025 00:44:34 +0000 (0:00:00.613) 0:02:10.817 *** 2025-09-03 00:46:05.094544 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:46:05.094553 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:46:05.094563 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:46:05.094573 | orchestrator | 2025-09-03 00:46:05.094583 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2025-09-03 00:46:05.094593 | orchestrator | Wednesday 03 September 2025 00:44:35 +0000 (0:00:00.872) 0:02:11.690 *** 2025-09-03 00:46:05.094603 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:46:05.094612 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:46:05.094622 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:46:05.094632 | orchestrator | 2025-09-03 00:46:05.094642 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2025-09-03 00:46:05.094652 | orchestrator | Wednesday 03 September 2025 00:44:35 +0000 (0:00:00.369) 0:02:12.060 *** 2025-09-03 00:46:05.094661 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:46:05.094671 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:46:05.094687 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:46:05.094697 | orchestrator | 2025-09-03 00:46:05.094707 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2025-09-03 00:46:05.094716 | orchestrator | Wednesday 03 September 
2025 00:44:36 +0000 (0:00:00.753) 0:02:12.813 *** 2025-09-03 00:46:05.094726 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:46:05.094736 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:46:05.094746 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:46:05.094756 | orchestrator | 2025-09-03 00:46:05.094765 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2025-09-03 00:46:05.094775 | orchestrator | Wednesday 03 September 2025 00:44:36 +0000 (0:00:00.644) 0:02:13.458 *** 2025-09-03 00:46:05.094785 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:46:05.094795 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:46:05.094805 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:46:05.094815 | orchestrator | 2025-09-03 00:46:05.094825 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2025-09-03 00:46:05.094835 | orchestrator | Wednesday 03 September 2025 00:44:37 +0000 (0:00:01.118) 0:02:14.576 *** 2025-09-03 00:46:05.094845 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:46:05.094855 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:46:05.094865 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:46:05.094875 | orchestrator | 2025-09-03 00:46:05.094885 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2025-09-03 00:46:05.094895 | orchestrator | Wednesday 03 September 2025 00:44:38 +0000 (0:00:00.847) 0:02:15.423 *** 2025-09-03 00:46:05.094904 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:46:05.094914 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:46:05.094924 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:46:05.094934 | orchestrator | 2025-09-03 00:46:05.094944 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2025-09-03 00:46:05.094954 | orchestrator | Wednesday 03 September 2025 00:44:39 +0000 (0:00:00.270) 0:02:15.694 *** 2025-09-03 00:46:05.094964 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:46:05.094974 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:46:05.094984 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:46:05.095015 | orchestrator | 2025-09-03 00:46:05.095042 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2025-09-03 00:46:05.095052 | orchestrator | Wednesday 03 September 2025 00:44:39 +0000 (0:00:00.275) 0:02:15.969 *** 2025-09-03 00:46:05.095062 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:46:05.095072 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:46:05.095082 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:46:05.095092 | orchestrator | 2025-09-03 00:46:05.095102 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2025-09-03 00:46:05.095112 | orchestrator | Wednesday 03 September 2025 00:44:40 +0000 (0:00:00.852) 0:02:16.822 *** 2025-09-03 00:46:05.095121 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:46:05.095131 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:46:05.095141 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:46:05.095151 | orchestrator | 2025-09-03 00:46:05.095161 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2025-09-03 00:46:05.095171 | orchestrator | Wednesday 03 September 2025 00:44:40 +0000 (0:00:00.628) 0:02:17.450 *** 
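Aside on the node-token sequence just above: the token that k3s writes on the servers is what the worker nodes later use to join, so the role briefly relaxes the token file's mode, reads and stores the token as a fact, then restores the restrictive permissions; kubectl on each master is also pointed at the kube-vip endpoint https://192.168.16.8:6443 instead of the local API address. A hedged sketch of the token handling, reusing the task names from the log and the conventional k3s token path; the module arguments and the restored mode are assumptions, not the role's verbatim source:

    # Illustrative only: read the join token once, keep it as a fact for
    # the agent play, and put the strict file mode back afterwards.
    - name: Read node-token from master
      ansible.builtin.slurp:
        src: /var/lib/rancher/k3s/server/node-token   # conventional k3s location
      register: node_token_file

    - name: Store Master node-token
      ansible.builtin.set_fact:
        token: "{{ node_token_file.content | b64decode | trim }}"

    - name: Restore node-token file access
      ansible.builtin.file:
        path: /var/lib/rancher/k3s/server/node-token
        mode: "0600"                                  # assumed original mode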
2025-09-03 00:46:05.095181 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-09-03 00:46:05.095191 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-09-03 00:46:05.095201 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-09-03 00:46:05.095211 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-09-03 00:46:05.095221 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-09-03 00:46:05.095237 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-09-03 00:46:05.095251 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-09-03 00:46:05.095262 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-09-03 00:46:05.095272 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-09-03 00:46:05.095287 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2025-09-03 00:46:05.095298 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-09-03 00:46:05.095308 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-09-03 00:46:05.095318 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2025-09-03 00:46:05.095328 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-09-03 00:46:05.095338 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-09-03 00:46:05.095348 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-09-03 00:46:05.095358 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-09-03 00:46:05.095368 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-09-03 00:46:05.095378 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-09-03 00:46:05.095388 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-09-03 00:46:05.095398 | orchestrator | 2025-09-03 00:46:05.095407 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2025-09-03 00:46:05.095417 | orchestrator | 2025-09-03 00:46:05.095427 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2025-09-03 00:46:05.095437 | orchestrator | Wednesday 03 September 2025 00:44:43 +0000 (0:00:02.983) 0:02:20.433 *** 2025-09-03 00:46:05.095447 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:46:05.095457 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:46:05.095467 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:46:05.095477 | orchestrator | 2025-09-03 00:46:05.095487 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2025-09-03 00:46:05.095497 | orchestrator | Wednesday 03 September 2025 00:44:44 
+0000 (0:00:00.475) 0:02:20.909 *** 2025-09-03 00:46:05.095507 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:46:05.095517 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:46:05.095527 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:46:05.095537 | orchestrator | 2025-09-03 00:46:05.095547 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2025-09-03 00:46:05.095557 | orchestrator | Wednesday 03 September 2025 00:44:44 +0000 (0:00:00.640) 0:02:21.549 *** 2025-09-03 00:46:05.095567 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:46:05.095577 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:46:05.095587 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:46:05.095597 | orchestrator | 2025-09-03 00:46:05.095607 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2025-09-03 00:46:05.095616 | orchestrator | Wednesday 03 September 2025 00:44:45 +0000 (0:00:00.332) 0:02:21.881 *** 2025-09-03 00:46:05.095626 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-03 00:46:05.095636 | orchestrator | 2025-09-03 00:46:05.095646 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2025-09-03 00:46:05.095656 | orchestrator | Wednesday 03 September 2025 00:44:46 +0000 (0:00:00.834) 0:02:22.716 *** 2025-09-03 00:46:05.095677 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:46:05.095687 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:46:05.095697 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:46:05.095707 | orchestrator | 2025-09-03 00:46:05.095717 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2025-09-03 00:46:05.095727 | orchestrator | Wednesday 03 September 2025 00:44:46 +0000 (0:00:00.352) 0:02:23.069 *** 2025-09-03 00:46:05.095737 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:46:05.095747 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:46:05.095757 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:46:05.095767 | orchestrator | 2025-09-03 00:46:05.095777 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2025-09-03 00:46:05.095787 | orchestrator | Wednesday 03 September 2025 00:44:46 +0000 (0:00:00.430) 0:02:23.500 *** 2025-09-03 00:46:05.095797 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:46:05.095807 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:46:05.095817 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:46:05.095827 | orchestrator | 2025-09-03 00:46:05.095837 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2025-09-03 00:46:05.095847 | orchestrator | Wednesday 03 September 2025 00:44:47 +0000 (0:00:00.402) 0:02:23.903 *** 2025-09-03 00:46:05.095857 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:46:05.095867 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:46:05.095877 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:46:05.095887 | orchestrator | 2025-09-03 00:46:05.095896 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2025-09-03 00:46:05.095906 | orchestrator | Wednesday 03 September 2025 00:44:48 +0000 (0:00:00.798) 0:02:24.701 *** 2025-09-03 00:46:05.095916 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:46:05.095926 | orchestrator | changed: [testbed-node-4] 
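Forward note on the next two tasks in the log, "Configure the k3s service" and "Manage k3s service": the agents join the control plane that was just built, presumably by rendering a k3s agent unit that points at the kube-vip endpoint with the stored token and then enabling it; the roughly 12-second "Manage k3s service" step below is the agents starting and registering. A minimal sketch under those assumptions; the template name, unit name and variables are illustrative, not the role's actual files:

    # Illustrative agent-join sketch: render a systemd unit for the k3s
    # agent from a template, then enable and (re)start it.
    - name: Configure the k3s service
      ansible.builtin.template:
        src: k3s-agent.service.j2                 # assumed template name
        dest: /etc/systemd/system/k3s-agent.service
        owner: root
        group: root
        mode: "0644"

    - name: Manage k3s service
      ansible.builtin.systemd:
        name: k3s-agent                           # assumed unit name
        daemon_reload: true
        state: restarted
        enabled: true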
2025-09-03 00:46:05.095936 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:46:05.095946 | orchestrator | 2025-09-03 00:46:05.095956 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2025-09-03 00:46:05.095966 | orchestrator | Wednesday 03 September 2025 00:44:49 +0000 (0:00:01.107) 0:02:25.809 *** 2025-09-03 00:46:05.095976 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:46:05.095986 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:46:05.095996 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:46:05.096006 | orchestrator | 2025-09-03 00:46:05.096020 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2025-09-03 00:46:05.096071 | orchestrator | Wednesday 03 September 2025 00:44:50 +0000 (0:00:01.272) 0:02:27.082 *** 2025-09-03 00:46:05.096082 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:46:05.096092 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:46:05.096102 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:46:05.096112 | orchestrator | 2025-09-03 00:46:05.096128 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-09-03 00:46:05.096138 | orchestrator | 2025-09-03 00:46:05.096148 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-09-03 00:46:05.096158 | orchestrator | Wednesday 03 September 2025 00:45:02 +0000 (0:00:12.248) 0:02:39.331 *** 2025-09-03 00:46:05.096168 | orchestrator | ok: [testbed-manager] 2025-09-03 00:46:05.096178 | orchestrator | 2025-09-03 00:46:05.096188 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-09-03 00:46:05.096197 | orchestrator | Wednesday 03 September 2025 00:45:03 +0000 (0:00:00.685) 0:02:40.016 *** 2025-09-03 00:46:05.096207 | orchestrator | changed: [testbed-manager] 2025-09-03 00:46:05.096217 | orchestrator | 2025-09-03 00:46:05.096227 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-09-03 00:46:05.096237 | orchestrator | Wednesday 03 September 2025 00:45:03 +0000 (0:00:00.381) 0:02:40.398 *** 2025-09-03 00:46:05.096247 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-09-03 00:46:05.096256 | orchestrator | 2025-09-03 00:46:05.096266 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-09-03 00:46:05.096282 | orchestrator | Wednesday 03 September 2025 00:45:04 +0000 (0:00:00.534) 0:02:40.932 *** 2025-09-03 00:46:05.096292 | orchestrator | changed: [testbed-manager] 2025-09-03 00:46:05.096302 | orchestrator | 2025-09-03 00:46:05.096312 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-09-03 00:46:05.096321 | orchestrator | Wednesday 03 September 2025 00:45:05 +0000 (0:00:00.748) 0:02:41.680 *** 2025-09-03 00:46:05.096331 | orchestrator | changed: [testbed-manager] 2025-09-03 00:46:05.096341 | orchestrator | 2025-09-03 00:46:05.096351 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-09-03 00:46:05.096360 | orchestrator | Wednesday 03 September 2025 00:45:05 +0000 (0:00:00.503) 0:02:42.184 *** 2025-09-03 00:46:05.096370 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-03 00:46:05.096380 | orchestrator | 2025-09-03 00:46:05.096390 | orchestrator | TASK [Change server address in the kubeconfig inside the 
manager service] ****** 2025-09-03 00:46:05.096400 | orchestrator | Wednesday 03 September 2025 00:45:07 +0000 (0:00:01.511) 0:02:43.695 *** 2025-09-03 00:46:05.096409 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-03 00:46:05.096419 | orchestrator | 2025-09-03 00:46:05.096429 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-09-03 00:46:05.096439 | orchestrator | Wednesday 03 September 2025 00:45:08 +0000 (0:00:00.997) 0:02:44.692 *** 2025-09-03 00:46:05.096448 | orchestrator | changed: [testbed-manager] 2025-09-03 00:46:05.096458 | orchestrator | 2025-09-03 00:46:05.096468 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-09-03 00:46:05.096478 | orchestrator | Wednesday 03 September 2025 00:45:08 +0000 (0:00:00.382) 0:02:45.075 *** 2025-09-03 00:46:05.096488 | orchestrator | changed: [testbed-manager] 2025-09-03 00:46:05.096497 | orchestrator | 2025-09-03 00:46:05.096507 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2025-09-03 00:46:05.096517 | orchestrator | 2025-09-03 00:46:05.096527 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2025-09-03 00:46:05.096537 | orchestrator | Wednesday 03 September 2025 00:45:09 +0000 (0:00:00.814) 0:02:45.889 *** 2025-09-03 00:46:05.096547 | orchestrator | ok: [testbed-manager] 2025-09-03 00:46:05.096557 | orchestrator | 2025-09-03 00:46:05.096577 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2025-09-03 00:46:05.096588 | orchestrator | Wednesday 03 September 2025 00:45:09 +0000 (0:00:00.095) 0:02:45.984 *** 2025-09-03 00:46:05.096596 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2025-09-03 00:46:05.096604 | orchestrator | 2025-09-03 00:46:05.096612 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2025-09-03 00:46:05.096620 | orchestrator | Wednesday 03 September 2025 00:45:09 +0000 (0:00:00.167) 0:02:46.151 *** 2025-09-03 00:46:05.096628 | orchestrator | ok: [testbed-manager] 2025-09-03 00:46:05.096636 | orchestrator | 2025-09-03 00:46:05.096644 | orchestrator | TASK [kubectl : Install apt-transport-https package] *************************** 2025-09-03 00:46:05.096652 | orchestrator | Wednesday 03 September 2025 00:45:10 +0000 (0:00:01.016) 0:02:47.168 *** 2025-09-03 00:46:05.096660 | orchestrator | ok: [testbed-manager] 2025-09-03 00:46:05.096668 | orchestrator | 2025-09-03 00:46:05.096676 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2025-09-03 00:46:05.096684 | orchestrator | Wednesday 03 September 2025 00:45:11 +0000 (0:00:01.448) 0:02:48.616 *** 2025-09-03 00:46:05.096692 | orchestrator | changed: [testbed-manager] 2025-09-03 00:46:05.096700 | orchestrator | 2025-09-03 00:46:05.096708 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2025-09-03 00:46:05.096716 | orchestrator | Wednesday 03 September 2025 00:45:12 +0000 (0:00:00.713) 0:02:49.330 *** 2025-09-03 00:46:05.096732 | orchestrator | ok: [testbed-manager] 2025-09-03 00:46:05.096740 | orchestrator | 2025-09-03 00:46:05.096748 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2025-09-03 00:46:05.096756 | orchestrator | Wednesday 03 September 2025 00:45:13 +0000 
(0:00:00.495) 0:02:49.825 *** 2025-09-03 00:46:05.096770 | orchestrator | changed: [testbed-manager] 2025-09-03 00:46:05.096778 | orchestrator | 2025-09-03 00:46:05.096786 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2025-09-03 00:46:05.096794 | orchestrator | Wednesday 03 September 2025 00:45:19 +0000 (0:00:06.683) 0:02:56.509 *** 2025-09-03 00:46:05.096802 | orchestrator | changed: [testbed-manager] 2025-09-03 00:46:05.096810 | orchestrator | 2025-09-03 00:46:05.096818 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2025-09-03 00:46:05.096830 | orchestrator | Wednesday 03 September 2025 00:45:33 +0000 (0:00:13.636) 0:03:10.145 *** 2025-09-03 00:46:05.096838 | orchestrator | ok: [testbed-manager] 2025-09-03 00:46:05.096846 | orchestrator | 2025-09-03 00:46:05.096854 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2025-09-03 00:46:05.096862 | orchestrator | 2025-09-03 00:46:05.096870 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2025-09-03 00:46:05.096883 | orchestrator | Wednesday 03 September 2025 00:45:34 +0000 (0:00:00.529) 0:03:10.675 *** 2025-09-03 00:46:05.096891 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:46:05.096900 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:46:05.096908 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:46:05.096916 | orchestrator | 2025-09-03 00:46:05.096924 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2025-09-03 00:46:05.096932 | orchestrator | Wednesday 03 September 2025 00:45:34 +0000 (0:00:00.324) 0:03:10.999 *** 2025-09-03 00:46:05.096940 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:46:05.096948 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:46:05.096957 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:46:05.096965 | orchestrator | 2025-09-03 00:46:05.096973 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2025-09-03 00:46:05.096981 | orchestrator | Wednesday 03 September 2025 00:45:34 +0000 (0:00:00.277) 0:03:11.277 *** 2025-09-03 00:46:05.096989 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 00:46:05.096997 | orchestrator | 2025-09-03 00:46:05.097005 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2025-09-03 00:46:05.097013 | orchestrator | Wednesday 03 September 2025 00:45:35 +0000 (0:00:00.644) 0:03:11.922 *** 2025-09-03 00:46:05.097021 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:46:05.097044 | orchestrator | 2025-09-03 00:46:05.097053 | orchestrator | TASK [k3s_server_post : Check if Cilium CLI is installed] ********************** 2025-09-03 00:46:05.097061 | orchestrator | Wednesday 03 September 2025 00:45:35 +0000 (0:00:00.199) 0:03:12.121 *** 2025-09-03 00:46:05.097069 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:46:05.097077 | orchestrator | 2025-09-03 00:46:05.097085 | orchestrator | TASK [k3s_server_post : Check for Cilium CLI version in command output] ******** 2025-09-03 00:46:05.097093 | orchestrator | Wednesday 03 September 2025 00:45:35 +0000 (0:00:00.181) 0:03:12.303 *** 2025-09-03 00:46:05.097101 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:46:05.097109 | orchestrator | 2025-09-03 
00:46:05.097117 | orchestrator | TASK [k3s_server_post : Get latest stable Cilium CLI version file] ************* 2025-09-03 00:46:05.097125 | orchestrator | Wednesday 03 September 2025 00:45:35 +0000 (0:00:00.184) 0:03:12.488 *** 2025-09-03 00:46:05.097133 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:46:05.097141 | orchestrator | 2025-09-03 00:46:05.097149 | orchestrator | TASK [k3s_server_post : Read Cilium CLI stable version from file] ************** 2025-09-03 00:46:05.097157 | orchestrator | Wednesday 03 September 2025 00:45:36 +0000 (0:00:00.197) 0:03:12.686 *** 2025-09-03 00:46:05.097166 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:46:05.097174 | orchestrator | 2025-09-03 00:46:05.097182 | orchestrator | TASK [k3s_server_post : Log installed Cilium CLI version] ********************** 2025-09-03 00:46:05.097190 | orchestrator | Wednesday 03 September 2025 00:45:36 +0000 (0:00:00.164) 0:03:12.851 *** 2025-09-03 00:46:05.097198 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:46:05.097212 | orchestrator | 2025-09-03 00:46:05.097220 | orchestrator | TASK [k3s_server_post : Log latest stable Cilium CLI version] ****************** 2025-09-03 00:46:05.097228 | orchestrator | Wednesday 03 September 2025 00:45:36 +0000 (0:00:00.160) 0:03:13.011 *** 2025-09-03 00:46:05.097236 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:46:05.097244 | orchestrator | 2025-09-03 00:46:05.097252 | orchestrator | TASK [k3s_server_post : Determine if Cilium CLI needs installation or update] *** 2025-09-03 00:46:05.097260 | orchestrator | Wednesday 03 September 2025 00:45:36 +0000 (0:00:00.178) 0:03:13.190 *** 2025-09-03 00:46:05.097268 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:46:05.097276 | orchestrator | 2025-09-03 00:46:05.097284 | orchestrator | TASK [k3s_server_post : Set architecture variable] ***************************** 2025-09-03 00:46:05.097292 | orchestrator | Wednesday 03 September 2025 00:45:36 +0000 (0:00:00.169) 0:03:13.359 *** 2025-09-03 00:46:05.097300 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:46:05.097308 | orchestrator | 2025-09-03 00:46:05.097317 | orchestrator | TASK [k3s_server_post : Download Cilium CLI and checksum] ********************** 2025-09-03 00:46:05.097325 | orchestrator | Wednesday 03 September 2025 00:45:37 +0000 (0:00:00.462) 0:03:13.822 *** 2025-09-03 00:46:05.097333 | orchestrator | skipping: [testbed-node-0] => (item=.tar.gz)  2025-09-03 00:46:05.097341 | orchestrator | skipping: [testbed-node-0] => (item=.tar.gz.sha256sum)  2025-09-03 00:46:05.097349 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:46:05.097357 | orchestrator | 2025-09-03 00:46:05.097365 | orchestrator | TASK [k3s_server_post : Verify the downloaded tarball] ************************* 2025-09-03 00:46:05.097373 | orchestrator | Wednesday 03 September 2025 00:45:37 +0000 (0:00:00.259) 0:03:14.082 *** 2025-09-03 00:46:05.097381 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:46:05.097389 | orchestrator | 2025-09-03 00:46:05.097398 | orchestrator | TASK [k3s_server_post : Extract Cilium CLI to /usr/local/bin] ****************** 2025-09-03 00:46:05.097406 | orchestrator | Wednesday 03 September 2025 00:45:37 +0000 (0:00:00.186) 0:03:14.268 *** 2025-09-03 00:46:05.097414 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:46:05.097422 | orchestrator | 2025-09-03 00:46:05.097430 | orchestrator | TASK [k3s_server_post : Remove downloaded tarball and checksum file] *********** 2025-09-03 00:46:05.097438 | 
orchestrator | Wednesday 03 September 2025 00:45:37 +0000 (0:00:00.187) 0:03:14.455 *** 2025-09-03 00:46:05.097446 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:46:05.097454 | orchestrator | 2025-09-03 00:46:05.097462 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2025-09-03 00:46:05.097470 | orchestrator | Wednesday 03 September 2025 00:45:38 +0000 (0:00:00.194) 0:03:14.650 *** 2025-09-03 00:46:05.097478 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:46:05.097486 | orchestrator | 2025-09-03 00:46:05.097494 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2025-09-03 00:46:05.097503 | orchestrator | Wednesday 03 September 2025 00:45:38 +0000 (0:00:00.197) 0:03:14.848 *** 2025-09-03 00:46:05.097514 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:46:05.097523 | orchestrator | 2025-09-03 00:46:05.097531 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2025-09-03 00:46:05.097539 | orchestrator | Wednesday 03 September 2025 00:45:38 +0000 (0:00:00.170) 0:03:15.018 *** 2025-09-03 00:46:05.097547 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:46:05.097555 | orchestrator | 2025-09-03 00:46:05.097563 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2025-09-03 00:46:05.097576 | orchestrator | Wednesday 03 September 2025 00:45:38 +0000 (0:00:00.193) 0:03:15.212 *** 2025-09-03 00:46:05.097584 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:46:05.097592 | orchestrator | 2025-09-03 00:46:05.097600 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2025-09-03 00:46:05.097608 | orchestrator | Wednesday 03 September 2025 00:45:38 +0000 (0:00:00.171) 0:03:15.384 *** 2025-09-03 00:46:05.097616 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:46:05.097625 | orchestrator | 2025-09-03 00:46:05.097633 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2025-09-03 00:46:05.097645 | orchestrator | Wednesday 03 September 2025 00:45:38 +0000 (0:00:00.217) 0:03:15.601 *** 2025-09-03 00:46:05.097654 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:46:05.097662 | orchestrator | 2025-09-03 00:46:05.097670 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2025-09-03 00:46:05.097678 | orchestrator | Wednesday 03 September 2025 00:45:39 +0000 (0:00:00.215) 0:03:15.817 *** 2025-09-03 00:46:05.097686 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:46:05.097694 | orchestrator | 2025-09-03 00:46:05.097702 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2025-09-03 00:46:05.097710 | orchestrator | Wednesday 03 September 2025 00:45:39 +0000 (0:00:00.280) 0:03:16.098 *** 2025-09-03 00:46:05.097718 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:46:05.097726 | orchestrator | 2025-09-03 00:46:05.097734 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2025-09-03 00:46:05.097742 | orchestrator | Wednesday 03 September 2025 00:45:39 +0000 (0:00:00.200) 0:03:16.298 *** 2025-09-03 00:46:05.097750 | orchestrator | skipping: [testbed-node-0] => (item=deployment/cilium-operator)  2025-09-03 00:46:05.097758 | orchestrator | skipping: [testbed-node-0] => (item=daemonset/cilium)  2025-09-03 00:46:05.097767 | orchestrator | 
skipping: [testbed-node-0] => (item=deployment/hubble-relay)  2025-09-03 00:46:05.097775 | orchestrator | skipping: [testbed-node-0] => (item=deployment/hubble-ui)  2025-09-03 00:46:05.097783 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:46:05.097791 | orchestrator | 2025-09-03 00:46:05.097799 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2025-09-03 00:46:05.097807 | orchestrator | Wednesday 03 September 2025 00:45:40 +0000 (0:00:00.974) 0:03:17.273 *** 2025-09-03 00:46:05.097815 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:46:05.097823 | orchestrator | 2025-09-03 00:46:05.097831 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2025-09-03 00:46:05.097839 | orchestrator | Wednesday 03 September 2025 00:45:40 +0000 (0:00:00.332) 0:03:17.606 *** 2025-09-03 00:46:05.097847 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:46:05.097855 | orchestrator | 2025-09-03 00:46:05.097864 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2025-09-03 00:46:05.097872 | orchestrator | Wednesday 03 September 2025 00:45:41 +0000 (0:00:00.229) 0:03:17.836 *** 2025-09-03 00:46:05.097880 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:46:05.097888 | orchestrator | 2025-09-03 00:46:05.097896 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2025-09-03 00:46:05.097904 | orchestrator | Wednesday 03 September 2025 00:45:41 +0000 (0:00:00.224) 0:03:18.060 *** 2025-09-03 00:46:05.097912 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:46:05.097920 | orchestrator | 2025-09-03 00:46:05.097929 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2025-09-03 00:46:05.097937 | orchestrator | Wednesday 03 September 2025 00:45:41 +0000 (0:00:00.222) 0:03:18.283 *** 2025-09-03 00:46:05.097945 | orchestrator | skipping: [testbed-node-0] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)  2025-09-03 00:46:05.097953 | orchestrator | skipping: [testbed-node-0] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)  2025-09-03 00:46:05.097961 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:46:05.097969 | orchestrator | 2025-09-03 00:46:05.097977 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2025-09-03 00:46:05.097985 | orchestrator | Wednesday 03 September 2025 00:45:41 +0000 (0:00:00.327) 0:03:18.611 *** 2025-09-03 00:46:05.097993 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:46:05.098001 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:46:05.098009 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:46:05.098064 | orchestrator | 2025-09-03 00:46:05.098074 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2025-09-03 00:46:05.098082 | orchestrator | Wednesday 03 September 2025 00:45:42 +0000 (0:00:00.427) 0:03:19.039 *** 2025-09-03 00:46:05.098095 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:46:05.098104 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:46:05.098112 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:46:05.098120 | orchestrator | 2025-09-03 00:46:05.098128 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2025-09-03 00:46:05.098136 | orchestrator | 2025-09-03 00:46:05.098144 | orchestrator | TASK [k9s : Gather variables for each 
operating system] ************************ 2025-09-03 00:46:05.098152 | orchestrator | Wednesday 03 September 2025 00:45:44 +0000 (0:00:01.615) 0:03:20.654 *** 2025-09-03 00:46:05.098160 | orchestrator | ok: [testbed-manager] 2025-09-03 00:46:05.098168 | orchestrator | 2025-09-03 00:46:05.098176 | orchestrator | TASK [k9s : Include distribution specific install tasks] *********************** 2025-09-03 00:46:05.098184 | orchestrator | Wednesday 03 September 2025 00:45:44 +0000 (0:00:00.195) 0:03:20.850 *** 2025-09-03 00:46:05.098192 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2025-09-03 00:46:05.098200 | orchestrator | 2025-09-03 00:46:05.098208 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2025-09-03 00:46:05.098219 | orchestrator | Wednesday 03 September 2025 00:45:44 +0000 (0:00:00.323) 0:03:21.173 *** 2025-09-03 00:46:05.098228 | orchestrator | changed: [testbed-manager] 2025-09-03 00:46:05.098235 | orchestrator | 2025-09-03 00:46:05.098244 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2025-09-03 00:46:05.098251 | orchestrator | 2025-09-03 00:46:05.098260 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2025-09-03 00:46:05.098273 | orchestrator | Wednesday 03 September 2025 00:45:49 +0000 (0:00:05.260) 0:03:26.434 *** 2025-09-03 00:46:05.098281 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:46:05.098290 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:46:05.098298 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:46:05.098306 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:46:05.098314 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:46:05.098322 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:46:05.098330 | orchestrator | 2025-09-03 00:46:05.098338 | orchestrator | TASK [Manage labels] *********************************************************** 2025-09-03 00:46:05.098346 | orchestrator | Wednesday 03 September 2025 00:45:50 +0000 (0:00:00.712) 0:03:27.147 *** 2025-09-03 00:46:05.098354 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-09-03 00:46:05.098362 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-09-03 00:46:05.098370 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-09-03 00:46:05.098378 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-09-03 00:46:05.098386 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-09-03 00:46:05.098394 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-09-03 00:46:05.098402 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-09-03 00:46:05.098410 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-09-03 00:46:05.098417 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-09-03 00:46:05.098425 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-09-03 00:46:05.098433 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2025-09-03 00:46:05.098441 | 
orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2025-09-03 00:46:05.098449 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2025-09-03 00:46:05.098457 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-09-03 00:46:05.098465 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-09-03 00:46:05.098478 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-09-03 00:46:05.098487 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-09-03 00:46:05.098494 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-09-03 00:46:05.098502 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-09-03 00:46:05.098510 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-09-03 00:46:05.098518 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-09-03 00:46:05.098526 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-09-03 00:46:05.098534 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-09-03 00:46:05.098542 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-09-03 00:46:05.098550 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-09-03 00:46:05.098558 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-09-03 00:46:05.098566 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-09-03 00:46:05.098574 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-09-03 00:46:05.098582 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-09-03 00:46:05.098589 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-09-03 00:46:05.098597 | orchestrator | 2025-09-03 00:46:05.098605 | orchestrator | TASK [Manage annotations] ****************************************************** 2025-09-03 00:46:05.098613 | orchestrator | Wednesday 03 September 2025 00:46:02 +0000 (0:00:11.797) 0:03:38.944 *** 2025-09-03 00:46:05.098621 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:46:05.098629 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:46:05.098637 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:46:05.098645 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:46:05.098653 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:46:05.098661 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:46:05.098669 | orchestrator | 2025-09-03 00:46:05.098677 | orchestrator | TASK [Manage taints] *********************************************************** 2025-09-03 00:46:05.098685 | orchestrator | Wednesday 03 September 2025 00:46:03 +0000 (0:00:00.860) 0:03:39.805 *** 2025-09-03 00:46:05.098693 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:46:05.098701 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:46:05.098709 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:46:05.098720 
| orchestrator | skipping: [testbed-node-0]
2025-09-03 00:46:05.098728 | orchestrator | skipping: [testbed-node-1]
2025-09-03 00:46:05.098736 | orchestrator | skipping: [testbed-node-2]
2025-09-03 00:46:05.098745 | orchestrator |
2025-09-03 00:46:05.098753 | orchestrator | PLAY RECAP *********************************************************************
2025-09-03 00:46:05.098766 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-03 00:46:05.098776 | orchestrator | testbed-node-0 : ok=42  changed=20  unreachable=0 failed=0 skipped=45  rescued=0 ignored=0
2025-09-03 00:46:05.098785 | orchestrator | testbed-node-1 : ok=39  changed=17  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-09-03 00:46:05.098793 | orchestrator | testbed-node-2 : ok=39  changed=17  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-09-03 00:46:05.098801 | orchestrator | testbed-node-3 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-09-03 00:46:05.098814 | orchestrator | testbed-node-4 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-09-03 00:46:05.098822 | orchestrator | testbed-node-5 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-09-03 00:46:05.098830 | orchestrator |
2025-09-03 00:46:05.098838 | orchestrator |
2025-09-03 00:46:05.098846 | orchestrator | TASKS RECAP ********************************************************************
2025-09-03 00:46:05.098854 | orchestrator | Wednesday 03 September 2025 00:46:03 +0000 (0:00:00.476) 0:03:40.282 ***
2025-09-03 00:46:05.098862 | orchestrator | ===============================================================================
2025-09-03 00:46:05.098870 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 56.69s
2025-09-03 00:46:05.098878 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 25.29s
2025-09-03 00:46:05.098886 | orchestrator | kubectl : Install required packages ------------------------------------ 13.64s
2025-09-03 00:46:05.098894 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 12.25s
2025-09-03 00:46:05.098902 | orchestrator | Manage labels ---------------------------------------------------------- 11.80s
2025-09-03 00:46:05.098910 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 6.68s
2025-09-03 00:46:05.098918 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 6.09s
2025-09-03 00:46:05.098926 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.26s
2025-09-03 00:46:05.098934 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 2.98s
2025-09-03 00:46:05.098942 | orchestrator | k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml --- 2.38s
2025-09-03 00:46:05.098950 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 2.28s
2025-09-03 00:46:05.098958 | orchestrator | k3s_custom_registries : Create directory /etc/rancher/k3s --------------- 2.26s
2025-09-03 00:46:05.098966 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.11s
2025-09-03 00:46:05.098974 | orchestrator | k3s_prereq : Enable IPv4 forwarding
------------------------------------- 2.05s 2025-09-03 00:46:05.098982 | orchestrator | k3s_server : Stop k3s --------------------------------------------------- 1.98s 2025-09-03 00:46:05.098990 | orchestrator | k3s_server : Clean previous runs of k3s-init ---------------------------- 1.81s 2025-09-03 00:46:05.098998 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 1.74s 2025-09-03 00:46:05.099006 | orchestrator | k3s_server_post : Remove tmp directory used for manifests --------------- 1.62s 2025-09-03 00:46:05.099014 | orchestrator | k3s_server : Create custom resolv.conf for k3s -------------------------- 1.60s 2025-09-03 00:46:05.099022 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 1.60s 2025-09-03 00:46:05.099063 | orchestrator | 2025-09-03 00:46:05 | INFO  | Task 8e4f7254-4b68-4c82-94f6-5f083843efa7 is in state STARTED 2025-09-03 00:46:05.099072 | orchestrator | 2025-09-03 00:46:05 | INFO  | Task 2b86bbe5-2d7e-4e5e-b447-c133623ebd8b is in state STARTED 2025-09-03 00:46:05.099080 | orchestrator | 2025-09-03 00:46:05 | INFO  | Task 195e51fb-4fb3-4eb9-bc76-b85ac6fccb32 is in state STARTED 2025-09-03 00:46:05.099088 | orchestrator | 2025-09-03 00:46:05 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:46:05.099096 | orchestrator | 2025-09-03 00:46:05 | INFO  | Task 0a47554e-e682-4d70-9aa6-6736a719fac4 is in state STARTED 2025-09-03 00:46:05.099105 | orchestrator | 2025-09-03 00:46:05 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:46:08.132567 | orchestrator | 2025-09-03 00:46:08 | INFO  | Task f3c465b7-dc1c-42cc-acd1-2e6db06e7dc6 is in state STARTED 2025-09-03 00:46:08.133807 | orchestrator | 2025-09-03 00:46:08 | INFO  | Task 8e4f7254-4b68-4c82-94f6-5f083843efa7 is in state STARTED 2025-09-03 00:46:08.134455 | orchestrator | 2025-09-03 00:46:08 | INFO  | Task 2b86bbe5-2d7e-4e5e-b447-c133623ebd8b is in state STARTED 2025-09-03 00:46:08.135207 | orchestrator | 2025-09-03 00:46:08 | INFO  | Task 195e51fb-4fb3-4eb9-bc76-b85ac6fccb32 is in state STARTED 2025-09-03 00:46:08.135987 | orchestrator | 2025-09-03 00:46:08 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:46:08.137095 | orchestrator | 2025-09-03 00:46:08 | INFO  | Task 0a47554e-e682-4d70-9aa6-6736a719fac4 is in state STARTED 2025-09-03 00:46:08.137118 | orchestrator | 2025-09-03 00:46:08 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:46:11.182749 | orchestrator | 2025-09-03 00:46:11 | INFO  | Task f3c465b7-dc1c-42cc-acd1-2e6db06e7dc6 is in state STARTED 2025-09-03 00:46:11.182868 | orchestrator | 2025-09-03 00:46:11 | INFO  | Task 8e4f7254-4b68-4c82-94f6-5f083843efa7 is in state STARTED 2025-09-03 00:46:11.182886 | orchestrator | 2025-09-03 00:46:11 | INFO  | Task 2b86bbe5-2d7e-4e5e-b447-c133623ebd8b is in state STARTED 2025-09-03 00:46:11.182898 | orchestrator | 2025-09-03 00:46:11 | INFO  | Task 195e51fb-4fb3-4eb9-bc76-b85ac6fccb32 is in state STARTED 2025-09-03 00:46:11.182910 | orchestrator | 2025-09-03 00:46:11 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:46:11.182922 | orchestrator | 2025-09-03 00:46:11 | INFO  | Task 0a47554e-e682-4d70-9aa6-6736a719fac4 is in state STARTED 2025-09-03 00:46:11.182934 | orchestrator | 2025-09-03 00:46:11 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:46:14.206367 | orchestrator | 2025-09-03 00:46:14 | INFO  | Task f3c465b7-dc1c-42cc-acd1-2e6db06e7dc6 is 
in state STARTED 2025-09-03 00:46:14.206469 | orchestrator | 2025-09-03 00:46:14 | INFO  | Task 8e4f7254-4b68-4c82-94f6-5f083843efa7 is in state STARTED 2025-09-03 00:46:14.208000 | orchestrator | 2025-09-03 00:46:14 | INFO  | Task 2b86bbe5-2d7e-4e5e-b447-c133623ebd8b is in state STARTED 2025-09-03 00:46:14.208421 | orchestrator | 2025-09-03 00:46:14 | INFO  | Task 195e51fb-4fb3-4eb9-bc76-b85ac6fccb32 is in state SUCCESS 2025-09-03 00:46:14.211295 | orchestrator | 2025-09-03 00:46:14 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:46:14.211329 | orchestrator | 2025-09-03 00:46:14 | INFO  | Task 0a47554e-e682-4d70-9aa6-6736a719fac4 is in state STARTED 2025-09-03 00:46:14.211342 | orchestrator | 2025-09-03 00:46:14 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:46:17.255235 | orchestrator | 2025-09-03 00:46:17 | INFO  | Task f3c465b7-dc1c-42cc-acd1-2e6db06e7dc6 is in state STARTED 2025-09-03 00:46:17.256866 | orchestrator | 2025-09-03 00:46:17 | INFO  | Task 8e4f7254-4b68-4c82-94f6-5f083843efa7 is in state STARTED 2025-09-03 00:46:17.261952 | orchestrator | 2025-09-03 00:46:17 | INFO  | Task 2b86bbe5-2d7e-4e5e-b447-c133623ebd8b is in state SUCCESS 2025-09-03 00:46:17.265619 | orchestrator | 2025-09-03 00:46:17 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:46:17.267779 | orchestrator | 2025-09-03 00:46:17 | INFO  | Task 0a47554e-e682-4d70-9aa6-6736a719fac4 is in state STARTED 2025-09-03 00:46:17.267803 | orchestrator | 2025-09-03 00:46:17 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:46:20.303253 | orchestrator | 2025-09-03 00:46:20 | INFO  | Task f3c465b7-dc1c-42cc-acd1-2e6db06e7dc6 is in state STARTED 2025-09-03 00:46:20.303490 | orchestrator | 2025-09-03 00:46:20 | INFO  | Task 8e4f7254-4b68-4c82-94f6-5f083843efa7 is in state STARTED 2025-09-03 00:46:20.305133 | orchestrator | 2025-09-03 00:46:20 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:46:20.306692 | orchestrator | 2025-09-03 00:46:20 | INFO  | Task 0a47554e-e682-4d70-9aa6-6736a719fac4 is in state STARTED 2025-09-03 00:46:20.306965 | orchestrator | 2025-09-03 00:46:20 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:46:23.345414 | orchestrator | 2025-09-03 00:46:23 | INFO  | Task f3c465b7-dc1c-42cc-acd1-2e6db06e7dc6 is in state STARTED 2025-09-03 00:46:23.349298 | orchestrator | 2025-09-03 00:46:23 | INFO  | Task 8e4f7254-4b68-4c82-94f6-5f083843efa7 is in state STARTED 2025-09-03 00:46:23.350257 | orchestrator | 2025-09-03 00:46:23 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:46:23.351131 | orchestrator | 2025-09-03 00:46:23 | INFO  | Task 0a47554e-e682-4d70-9aa6-6736a719fac4 is in state STARTED 2025-09-03 00:46:23.351306 | orchestrator | 2025-09-03 00:46:23 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:46:26.388641 | orchestrator | 2025-09-03 00:46:26 | INFO  | Task f3c465b7-dc1c-42cc-acd1-2e6db06e7dc6 is in state STARTED 2025-09-03 00:46:26.389724 | orchestrator | 2025-09-03 00:46:26 | INFO  | Task 8e4f7254-4b68-4c82-94f6-5f083843efa7 is in state STARTED 2025-09-03 00:46:26.391971 | orchestrator | 2025-09-03 00:46:26 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:46:26.393409 | orchestrator | 2025-09-03 00:46:26 | INFO  | Task 0a47554e-e682-4d70-9aa6-6736a719fac4 is in state STARTED 2025-09-03 00:46:26.393627 | orchestrator | 2025-09-03 00:46:26 | INFO  | Wait 1 second(s) until 
the next check 2025-09-03 00:46:29.437656 | orchestrator | 2025-09-03 00:46:29 | INFO  | Task f3c465b7-dc1c-42cc-acd1-2e6db06e7dc6 is in state STARTED 2025-09-03 00:46:29.438684 | orchestrator | 2025-09-03 00:46:29 | INFO  | Task 8e4f7254-4b68-4c82-94f6-5f083843efa7 is in state STARTED 2025-09-03 00:46:29.442181 | orchestrator | 2025-09-03 00:46:29 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:46:29.443063 | orchestrator | 2025-09-03 00:46:29 | INFO  | Task 0a47554e-e682-4d70-9aa6-6736a719fac4 is in state STARTED 2025-09-03 00:46:29.443091 | orchestrator | 2025-09-03 00:46:29 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:46:32.476999 | orchestrator | 2025-09-03 00:46:32 | INFO  | Task f3c465b7-dc1c-42cc-acd1-2e6db06e7dc6 is in state STARTED 2025-09-03 00:46:32.480942 | orchestrator | 2025-09-03 00:46:32 | INFO  | Task 8e4f7254-4b68-4c82-94f6-5f083843efa7 is in state STARTED 2025-09-03 00:46:32.482764 | orchestrator | 2025-09-03 00:46:32 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:46:32.483811 | orchestrator | 2025-09-03 00:46:32 | INFO  | Task 0a47554e-e682-4d70-9aa6-6736a719fac4 is in state STARTED 2025-09-03 00:46:32.484094 | orchestrator | 2025-09-03 00:46:32 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:46:35.534474 | orchestrator | 2025-09-03 00:46:35 | INFO  | Task f3c465b7-dc1c-42cc-acd1-2e6db06e7dc6 is in state STARTED 2025-09-03 00:46:35.535353 | orchestrator | 2025-09-03 00:46:35 | INFO  | Task 8e4f7254-4b68-4c82-94f6-5f083843efa7 is in state STARTED 2025-09-03 00:46:35.537063 | orchestrator | 2025-09-03 00:46:35 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:46:35.539699 | orchestrator | 2025-09-03 00:46:35 | INFO  | Task 0a47554e-e682-4d70-9aa6-6736a719fac4 is in state STARTED 2025-09-03 00:46:35.539741 | orchestrator | 2025-09-03 00:46:35 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:46:38.590940 | orchestrator | 2025-09-03 00:46:38 | INFO  | Task f3c465b7-dc1c-42cc-acd1-2e6db06e7dc6 is in state STARTED 2025-09-03 00:46:38.592082 | orchestrator | 2025-09-03 00:46:38 | INFO  | Task 8e4f7254-4b68-4c82-94f6-5f083843efa7 is in state STARTED 2025-09-03 00:46:38.593541 | orchestrator | 2025-09-03 00:46:38 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:46:38.594621 | orchestrator | 2025-09-03 00:46:38 | INFO  | Task 0a47554e-e682-4d70-9aa6-6736a719fac4 is in state STARTED 2025-09-03 00:46:38.594714 | orchestrator | 2025-09-03 00:46:38 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:46:41.639191 | orchestrator | 2025-09-03 00:46:41 | INFO  | Task f3c465b7-dc1c-42cc-acd1-2e6db06e7dc6 is in state STARTED 2025-09-03 00:46:41.641347 | orchestrator | 2025-09-03 00:46:41 | INFO  | Task 8e4f7254-4b68-4c82-94f6-5f083843efa7 is in state STARTED 2025-09-03 00:46:41.644284 | orchestrator | 2025-09-03 00:46:41 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:46:41.646585 | orchestrator | 2025-09-03 00:46:41 | INFO  | Task 0a47554e-e682-4d70-9aa6-6736a719fac4 is in state STARTED 2025-09-03 00:46:41.646862 | orchestrator | 2025-09-03 00:46:41 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:46:44.687177 | orchestrator | 2025-09-03 00:46:44 | INFO  | Task f3c465b7-dc1c-42cc-acd1-2e6db06e7dc6 is in state STARTED 2025-09-03 00:46:44.688384 | orchestrator | 2025-09-03 00:46:44 | INFO  | Task 8e4f7254-4b68-4c82-94f6-5f083843efa7 is in 
state STARTED 2025-09-03 00:46:44.690662 | orchestrator | 2025-09-03 00:46:44 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:46:44.692226 | orchestrator | 2025-09-03 00:46:44 | INFO  | Task 0a47554e-e682-4d70-9aa6-6736a719fac4 is in state STARTED 2025-09-03 00:46:44.692660 | orchestrator | 2025-09-03 00:46:44 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:46:47.737237 | orchestrator | 2025-09-03 00:46:47 | INFO  | Task f3c465b7-dc1c-42cc-acd1-2e6db06e7dc6 is in state STARTED 2025-09-03 00:46:47.738759 | orchestrator | 2025-09-03 00:46:47 | INFO  | Task 8e4f7254-4b68-4c82-94f6-5f083843efa7 is in state STARTED 2025-09-03 00:46:47.741616 | orchestrator | 2025-09-03 00:46:47 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:46:47.743771 | orchestrator | 2025-09-03 00:46:47 | INFO  | Task 0a47554e-e682-4d70-9aa6-6736a719fac4 is in state STARTED 2025-09-03 00:46:47.744081 | orchestrator | 2025-09-03 00:46:47 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:46:50.789933 | orchestrator | 2025-09-03 00:46:50 | INFO  | Task f3c465b7-dc1c-42cc-acd1-2e6db06e7dc6 is in state STARTED 2025-09-03 00:46:50.790143 | orchestrator | 2025-09-03 00:46:50 | INFO  | Task 8e4f7254-4b68-4c82-94f6-5f083843efa7 is in state STARTED 2025-09-03 00:46:50.791529 | orchestrator | 2025-09-03 00:46:50 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:46:50.792487 | orchestrator | 2025-09-03 00:46:50 | INFO  | Task 0a47554e-e682-4d70-9aa6-6736a719fac4 is in state STARTED 2025-09-03 00:46:50.792510 | orchestrator | 2025-09-03 00:46:50 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:46:53.834583 | orchestrator | 2025-09-03 00:46:53 | INFO  | Task f3c465b7-dc1c-42cc-acd1-2e6db06e7dc6 is in state STARTED 2025-09-03 00:46:53.836489 | orchestrator | 2025-09-03 00:46:53 | INFO  | Task 8e4f7254-4b68-4c82-94f6-5f083843efa7 is in state STARTED 2025-09-03 00:46:53.838421 | orchestrator | 2025-09-03 00:46:53 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:46:53.840728 | orchestrator | 2025-09-03 00:46:53 | INFO  | Task 0a47554e-e682-4d70-9aa6-6736a719fac4 is in state STARTED 2025-09-03 00:46:53.840773 | orchestrator | 2025-09-03 00:46:53 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:46:56.896794 | orchestrator | 2025-09-03 00:46:56 | INFO  | Task f3c465b7-dc1c-42cc-acd1-2e6db06e7dc6 is in state STARTED 2025-09-03 00:46:56.898295 | orchestrator | 2025-09-03 00:46:56 | INFO  | Task 8e4f7254-4b68-4c82-94f6-5f083843efa7 is in state STARTED 2025-09-03 00:46:56.901760 | orchestrator | 2025-09-03 00:46:56 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:46:56.902675 | orchestrator | 2025-09-03 00:46:56 | INFO  | Task 0a47554e-e682-4d70-9aa6-6736a719fac4 is in state STARTED 2025-09-03 00:46:56.902860 | orchestrator | 2025-09-03 00:46:56 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:46:59.946479 | orchestrator | 2025-09-03 00:46:59 | INFO  | Task f3c465b7-dc1c-42cc-acd1-2e6db06e7dc6 is in state STARTED 2025-09-03 00:46:59.947291 | orchestrator | 2025-09-03 00:46:59 | INFO  | Task 8e4f7254-4b68-4c82-94f6-5f083843efa7 is in state STARTED 2025-09-03 00:46:59.947712 | orchestrator | 2025-09-03 00:46:59 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:46:59.949139 | orchestrator | 2025-09-03 00:46:59 | INFO  | Task 0a47554e-e682-4d70-9aa6-6736a719fac4 is in 
state STARTED 2025-09-03 00:46:59.949235 | orchestrator | 2025-09-03 00:46:59 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:47:02.987509 | orchestrator | 2025-09-03 00:47:02 | INFO  | Task f3c465b7-dc1c-42cc-acd1-2e6db06e7dc6 is in state STARTED 2025-09-03 00:47:02.989738 | orchestrator | 2025-09-03 00:47:02 | INFO  | Task 8e4f7254-4b68-4c82-94f6-5f083843efa7 is in state STARTED 2025-09-03 00:47:02.992450 | orchestrator | 2025-09-03 00:47:02 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:47:02.994869 | orchestrator | 2025-09-03 00:47:02 | INFO  | Task 0a47554e-e682-4d70-9aa6-6736a719fac4 is in state STARTED 2025-09-03 00:47:02.994963 | orchestrator | 2025-09-03 00:47:02 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:47:06.050713 | orchestrator | 2025-09-03 00:47:06 | INFO  | Task f3c465b7-dc1c-42cc-acd1-2e6db06e7dc6 is in state STARTED 2025-09-03 00:47:06.050832 | orchestrator | 2025-09-03 00:47:06 | INFO  | Task 8e4f7254-4b68-4c82-94f6-5f083843efa7 is in state STARTED 2025-09-03 00:47:06.051951 | orchestrator | 2025-09-03 00:47:06 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:47:06.052964 | orchestrator | 2025-09-03 00:47:06 | INFO  | Task 0a47554e-e682-4d70-9aa6-6736a719fac4 is in state STARTED 2025-09-03 00:47:06.052987 | orchestrator | 2025-09-03 00:47:06 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:47:09.083721 | orchestrator | 2025-09-03 00:47:09 | INFO  | Task f3c465b7-dc1c-42cc-acd1-2e6db06e7dc6 is in state STARTED 2025-09-03 00:47:09.084505 | orchestrator | 2025-09-03 00:47:09 | INFO  | Task 8e4f7254-4b68-4c82-94f6-5f083843efa7 is in state STARTED 2025-09-03 00:47:09.085845 | orchestrator | 2025-09-03 00:47:09 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:47:09.086587 | orchestrator | 2025-09-03 00:47:09 | INFO  | Task 0a47554e-e682-4d70-9aa6-6736a719fac4 is in state STARTED 2025-09-03 00:47:09.086705 | orchestrator | 2025-09-03 00:47:09 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:47:12.130782 | orchestrator | 2025-09-03 00:47:12 | INFO  | Task f3c465b7-dc1c-42cc-acd1-2e6db06e7dc6 is in state STARTED 2025-09-03 00:47:12.131956 | orchestrator | 2025-09-03 00:47:12 | INFO  | Task 8e4f7254-4b68-4c82-94f6-5f083843efa7 is in state STARTED 2025-09-03 00:47:12.133574 | orchestrator | 2025-09-03 00:47:12 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:47:12.137075 | orchestrator | 2025-09-03 00:47:12 | INFO  | Task 0a47554e-e682-4d70-9aa6-6736a719fac4 is in state STARTED 2025-09-03 00:47:12.137102 | orchestrator | 2025-09-03 00:47:12 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:47:15.170583 | orchestrator | 2025-09-03 00:47:15 | INFO  | Task f3c465b7-dc1c-42cc-acd1-2e6db06e7dc6 is in state STARTED 2025-09-03 00:47:15.170681 | orchestrator | 2025-09-03 00:47:15 | INFO  | Task 8e4f7254-4b68-4c82-94f6-5f083843efa7 is in state STARTED 2025-09-03 00:47:15.171494 | orchestrator | 2025-09-03 00:47:15 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:47:15.172541 | orchestrator | 2025-09-03 00:47:15 | INFO  | Task 0a47554e-e682-4d70-9aa6-6736a719fac4 is in state STARTED 2025-09-03 00:47:15.172796 | orchestrator | 2025-09-03 00:47:15 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:47:18.226906 | orchestrator | 2025-09-03 00:47:18 | INFO  | Task f3c465b7-dc1c-42cc-acd1-2e6db06e7dc6 is in state STARTED 2025-09-03 
00:47:18.227933 | orchestrator | 2025-09-03 00:47:18 | INFO  | Task 8e4f7254-4b68-4c82-94f6-5f083843efa7 is in state STARTED 2025-09-03 00:47:18.228800 | orchestrator | 2025-09-03 00:47:18 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:47:18.230305 | orchestrator | 2025-09-03 00:47:18 | INFO  | Task 0a47554e-e682-4d70-9aa6-6736a719fac4 is in state STARTED 2025-09-03 00:47:18.230439 | orchestrator | 2025-09-03 00:47:18 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:47:21.255343 | orchestrator | 2025-09-03 00:47:21 | INFO  | Task f3c465b7-dc1c-42cc-acd1-2e6db06e7dc6 is in state STARTED 2025-09-03 00:47:21.255586 | orchestrator | 2025-09-03 00:47:21 | INFO  | Task 8e4f7254-4b68-4c82-94f6-5f083843efa7 is in state STARTED 2025-09-03 00:47:21.256240 | orchestrator | 2025-09-03 00:47:21 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:47:21.258437 | orchestrator | 2025-09-03 00:47:21 | INFO  | Task 0a47554e-e682-4d70-9aa6-6736a719fac4 is in state STARTED 2025-09-03 00:47:21.258464 | orchestrator | 2025-09-03 00:47:21 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:47:24.295128 | orchestrator | 2025-09-03 00:47:24 | INFO  | Task f3c465b7-dc1c-42cc-acd1-2e6db06e7dc6 is in state STARTED 2025-09-03 00:47:24.296227 | orchestrator | 2025-09-03 00:47:24 | INFO  | Task 8e4f7254-4b68-4c82-94f6-5f083843efa7 is in state STARTED 2025-09-03 00:47:24.300458 | orchestrator | 2025-09-03 00:47:24 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:47:24.301188 | orchestrator | 2025-09-03 00:47:24 | INFO  | Task 0a47554e-e682-4d70-9aa6-6736a719fac4 is in state STARTED 2025-09-03 00:47:24.301223 | orchestrator | 2025-09-03 00:47:24 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:47:27.328387 | orchestrator | 2025-09-03 00:47:27 | INFO  | Task f3c465b7-dc1c-42cc-acd1-2e6db06e7dc6 is in state STARTED 2025-09-03 00:47:27.328657 | orchestrator | 2025-09-03 00:47:27 | INFO  | Task 8e4f7254-4b68-4c82-94f6-5f083843efa7 is in state STARTED 2025-09-03 00:47:27.329233 | orchestrator | 2025-09-03 00:47:27 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:47:27.330105 | orchestrator | 2025-09-03 00:47:27 | INFO  | Task 0a47554e-e682-4d70-9aa6-6736a719fac4 is in state STARTED 2025-09-03 00:47:27.330130 | orchestrator | 2025-09-03 00:47:27 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:47:30.363638 | orchestrator | 2025-09-03 00:47:30 | INFO  | Task f3c465b7-dc1c-42cc-acd1-2e6db06e7dc6 is in state STARTED 2025-09-03 00:47:30.365132 | orchestrator | 2025-09-03 00:47:30 | INFO  | Task 8e4f7254-4b68-4c82-94f6-5f083843efa7 is in state SUCCESS 2025-09-03 00:47:30.366892 | orchestrator | 2025-09-03 00:47:30.366931 | orchestrator | 2025-09-03 00:47:30.366943 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2025-09-03 00:47:30.366956 | orchestrator | 2025-09-03 00:47:30.366967 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-09-03 00:47:30.366979 | orchestrator | Wednesday 03 September 2025 00:46:08 +0000 (0:00:00.186) 0:00:00.186 *** 2025-09-03 00:47:30.366991 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-09-03 00:47:30.367003 | orchestrator | 2025-09-03 00:47:30.367035 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-09-03 00:47:30.367046 | 
orchestrator | Wednesday 03 September 2025 00:46:09 +0000 (0:00:00.857) 0:00:01.044 *** 2025-09-03 00:47:30.367058 | orchestrator | changed: [testbed-manager] 2025-09-03 00:47:30.367072 | orchestrator | 2025-09-03 00:47:30.367083 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2025-09-03 00:47:30.367095 | orchestrator | Wednesday 03 September 2025 00:46:10 +0000 (0:00:01.371) 0:00:02.415 *** 2025-09-03 00:47:30.367106 | orchestrator | changed: [testbed-manager] 2025-09-03 00:47:30.367117 | orchestrator | 2025-09-03 00:47:30.367128 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-03 00:47:30.367139 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-03 00:47:30.367152 | orchestrator | 2025-09-03 00:47:30.367163 | orchestrator | 2025-09-03 00:47:30.367175 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-03 00:47:30.367186 | orchestrator | Wednesday 03 September 2025 00:46:11 +0000 (0:00:00.474) 0:00:02.889 *** 2025-09-03 00:47:30.367197 | orchestrator | =============================================================================== 2025-09-03 00:47:30.367208 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.37s 2025-09-03 00:47:30.367219 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.86s 2025-09-03 00:47:30.367230 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.47s 2025-09-03 00:47:30.367241 | orchestrator | 2025-09-03 00:47:30.367252 | orchestrator | 2025-09-03 00:47:30.367263 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-09-03 00:47:30.367274 | orchestrator | 2025-09-03 00:47:30.367285 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-09-03 00:47:30.367296 | orchestrator | Wednesday 03 September 2025 00:46:08 +0000 (0:00:00.169) 0:00:00.169 *** 2025-09-03 00:47:30.367307 | orchestrator | ok: [testbed-manager] 2025-09-03 00:47:30.367320 | orchestrator | 2025-09-03 00:47:30.367365 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-09-03 00:47:30.367379 | orchestrator | Wednesday 03 September 2025 00:46:08 +0000 (0:00:00.507) 0:00:00.677 *** 2025-09-03 00:47:30.367390 | orchestrator | ok: [testbed-manager] 2025-09-03 00:47:30.367401 | orchestrator | 2025-09-03 00:47:30.367412 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-09-03 00:47:30.367423 | orchestrator | Wednesday 03 September 2025 00:46:08 +0000 (0:00:00.484) 0:00:01.162 *** 2025-09-03 00:47:30.367434 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-09-03 00:47:30.367445 | orchestrator | 2025-09-03 00:47:30.367456 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-09-03 00:47:30.367483 | orchestrator | Wednesday 03 September 2025 00:46:09 +0000 (0:00:00.664) 0:00:01.826 *** 2025-09-03 00:47:30.367497 | orchestrator | changed: [testbed-manager] 2025-09-03 00:47:30.367510 | orchestrator | 2025-09-03 00:47:30.367522 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-09-03 00:47:30.367535 | orchestrator | Wednesday 03 September 2025 00:46:11 
+0000 (0:00:01.819) 0:00:03.646 *** 2025-09-03 00:47:30.367562 | orchestrator | changed: [testbed-manager] 2025-09-03 00:47:30.367574 | orchestrator | 2025-09-03 00:47:30.367587 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-09-03 00:47:30.367600 | orchestrator | Wednesday 03 September 2025 00:46:12 +0000 (0:00:00.651) 0:00:04.297 *** 2025-09-03 00:47:30.367613 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-03 00:47:30.367625 | orchestrator | 2025-09-03 00:47:30.367638 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-09-03 00:47:30.367650 | orchestrator | Wednesday 03 September 2025 00:46:13 +0000 (0:00:01.450) 0:00:05.747 *** 2025-09-03 00:47:30.367664 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-03 00:47:30.367676 | orchestrator | 2025-09-03 00:47:30.367689 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-09-03 00:47:30.367702 | orchestrator | Wednesday 03 September 2025 00:46:14 +0000 (0:00:00.722) 0:00:06.469 *** 2025-09-03 00:47:30.367714 | orchestrator | ok: [testbed-manager] 2025-09-03 00:47:30.367727 | orchestrator | 2025-09-03 00:47:30.367739 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-09-03 00:47:30.367752 | orchestrator | Wednesday 03 September 2025 00:46:14 +0000 (0:00:00.352) 0:00:06.822 *** 2025-09-03 00:47:30.367765 | orchestrator | ok: [testbed-manager] 2025-09-03 00:47:30.367777 | orchestrator | 2025-09-03 00:47:30.367790 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-03 00:47:30.367805 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-03 00:47:30.367817 | orchestrator | 2025-09-03 00:47:30.367830 | orchestrator | 2025-09-03 00:47:30.367844 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-03 00:47:30.367857 | orchestrator | Wednesday 03 September 2025 00:46:15 +0000 (0:00:00.457) 0:00:07.280 *** 2025-09-03 00:47:30.367868 | orchestrator | =============================================================================== 2025-09-03 00:47:30.367879 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.82s 2025-09-03 00:47:30.367890 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.45s 2025-09-03 00:47:30.367901 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.72s 2025-09-03 00:47:30.367924 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.66s 2025-09-03 00:47:30.367936 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.65s 2025-09-03 00:47:30.367947 | orchestrator | Get home directory of operator user ------------------------------------- 0.51s 2025-09-03 00:47:30.367958 | orchestrator | Create .kube directory -------------------------------------------------- 0.48s 2025-09-03 00:47:30.367969 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.46s 2025-09-03 00:47:30.367980 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.35s 2025-09-03 00:47:30.367991 | orchestrator | 2025-09-03 00:47:30.368002 | orchestrator | 2025-09-03 00:47:30.368028 | orchestrator | 
PLAY [Set kolla_action_rabbitmq] *********************************************** 2025-09-03 00:47:30.368040 | orchestrator | 2025-09-03 00:47:30.368050 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-09-03 00:47:30.368061 | orchestrator | Wednesday 03 September 2025 00:45:11 +0000 (0:00:00.264) 0:00:00.264 *** 2025-09-03 00:47:30.368072 | orchestrator | ok: [localhost] => { 2025-09-03 00:47:30.368084 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2025-09-03 00:47:30.368096 | orchestrator | } 2025-09-03 00:47:30.368107 | orchestrator | 2025-09-03 00:47:30.368118 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2025-09-03 00:47:30.368129 | orchestrator | Wednesday 03 September 2025 00:45:11 +0000 (0:00:00.078) 0:00:00.343 *** 2025-09-03 00:47:30.368141 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 1, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2025-09-03 00:47:30.368161 | orchestrator | ...ignoring 2025-09-03 00:47:30.368172 | orchestrator | 2025-09-03 00:47:30.368184 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2025-09-03 00:47:30.368194 | orchestrator | Wednesday 03 September 2025 00:45:14 +0000 (0:00:03.200) 0:00:03.543 *** 2025-09-03 00:47:30.368205 | orchestrator | skipping: [localhost] 2025-09-03 00:47:30.368216 | orchestrator | 2025-09-03 00:47:30.368227 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2025-09-03 00:47:30.368237 | orchestrator | Wednesday 03 September 2025 00:45:14 +0000 (0:00:00.130) 0:00:03.674 *** 2025-09-03 00:47:30.368248 | orchestrator | ok: [localhost] 2025-09-03 00:47:30.368259 | orchestrator | 2025-09-03 00:47:30.368270 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-03 00:47:30.368281 | orchestrator | 2025-09-03 00:47:30.368291 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-03 00:47:30.368302 | orchestrator | Wednesday 03 September 2025 00:45:14 +0000 (0:00:00.196) 0:00:03.870 *** 2025-09-03 00:47:30.368313 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:47:30.368324 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:47:30.368335 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:47:30.368345 | orchestrator | 2025-09-03 00:47:30.368356 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-03 00:47:30.368367 | orchestrator | Wednesday 03 September 2025 00:45:15 +0000 (0:00:00.322) 0:00:04.193 *** 2025-09-03 00:47:30.368378 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2025-09-03 00:47:30.368389 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2025-09-03 00:47:30.368405 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2025-09-03 00:47:30.368416 | orchestrator | 2025-09-03 00:47:30.368427 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2025-09-03 00:47:30.368438 | orchestrator | 2025-09-03 00:47:30.368449 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-09-03 00:47:30.368459 | orchestrator | Wednesday 03 September 2025 00:45:15 +0000 (0:00:00.756) 0:00:04.949 *** 
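The ignored failure of "Check RabbitMQ service" above is the expected probe pattern: the play tests whether the RabbitMQ management endpoint already answers and, only if it does, switches kolla_action_rabbitmq to upgrade; on a fresh deployment the probe times out and kolla_action_ng (deploy) is kept. A plausible reconstruction of that probe is sketched below, assuming ansible.builtin.wait_for (the "Timeout when waiting for search string ..." message matches that module); it is not the verbatim OSISM playbook, and the timeout value is an assumption.

- hosts: localhost
  gather_facts: false
  tasks:
    - name: Check RabbitMQ service
      ansible.builtin.wait_for:
        host: 192.168.16.9                 # internal VIP seen in the log above
        port: 15672
        search_regex: RabbitMQ Management
        timeout: 2                          # short timeout; a failure just means "not deployed yet"
      register: rabbitmq_check
      ignore_errors: true

    - name: Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running
      ansible.builtin.set_fact:
        kolla_action_rabbitmq: upgrade
      when: rabbitmq_check is succeeded

    - name: Set kolla_action_rabbitmq = kolla_action_ng
      ansible.builtin.set_fact:
        kolla_action_rabbitmq: "{{ kolla_action_ng }}"   # kolla_action_ng is computed elsewhere (e.g. deploy)
      when: rabbitmq_check is failed
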
2025-09-03 00:47:30.368470 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 00:47:30.368481 | orchestrator | 2025-09-03 00:47:30.368492 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-09-03 00:47:30.368503 | orchestrator | Wednesday 03 September 2025 00:45:16 +0000 (0:00:00.587) 0:00:05.536 *** 2025-09-03 00:47:30.368514 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:47:30.368524 | orchestrator | 2025-09-03 00:47:30.368535 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2025-09-03 00:47:30.368546 | orchestrator | Wednesday 03 September 2025 00:45:17 +0000 (0:00:01.090) 0:00:06.627 *** 2025-09-03 00:47:30.368557 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:47:30.368567 | orchestrator | 2025-09-03 00:47:30.368578 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2025-09-03 00:47:30.368589 | orchestrator | Wednesday 03 September 2025 00:45:18 +0000 (0:00:00.548) 0:00:07.176 *** 2025-09-03 00:47:30.368600 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:47:30.368611 | orchestrator | 2025-09-03 00:47:30.368621 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2025-09-03 00:47:30.368632 | orchestrator | Wednesday 03 September 2025 00:45:19 +0000 (0:00:00.872) 0:00:08.049 *** 2025-09-03 00:47:30.368643 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:47:30.368653 | orchestrator | 2025-09-03 00:47:30.368664 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2025-09-03 00:47:30.368675 | orchestrator | Wednesday 03 September 2025 00:45:19 +0000 (0:00:00.745) 0:00:08.794 *** 2025-09-03 00:47:30.368685 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:47:30.368696 | orchestrator | 2025-09-03 00:47:30.368707 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-09-03 00:47:30.368725 | orchestrator | Wednesday 03 September 2025 00:45:20 +0000 (0:00:01.082) 0:00:09.877 *** 2025-09-03 00:47:30.368736 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 00:47:30.368746 | orchestrator | 2025-09-03 00:47:30.368757 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-09-03 00:47:30.368774 | orchestrator | Wednesday 03 September 2025 00:45:22 +0000 (0:00:01.647) 0:00:11.525 *** 2025-09-03 00:47:30.368786 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:47:30.368797 | orchestrator | 2025-09-03 00:47:30.368808 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2025-09-03 00:47:30.368819 | orchestrator | Wednesday 03 September 2025 00:45:23 +0000 (0:00:00.824) 0:00:12.350 *** 2025-09-03 00:47:30.368859 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:47:30.368871 | orchestrator | 2025-09-03 00:47:30.368882 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2025-09-03 00:47:30.368894 | orchestrator | Wednesday 03 September 2025 00:45:24 +0000 (0:00:00.971) 0:00:13.321 *** 2025-09-03 00:47:30.368905 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:47:30.368915 | orchestrator | 2025-09-03 00:47:30.368926 | orchestrator | TASK [rabbitmq : Ensuring 
config directories exist] **************************** 2025-09-03 00:47:30.368937 | orchestrator | Wednesday 03 September 2025 00:45:24 +0000 (0:00:00.336) 0:00:13.657 *** 2025-09-03 00:47:30.368955 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-03 00:47:30.368979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-03 00:47:30.368993 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-03 00:47:30.369030 | orchestrator | 2025-09-03 00:47:30.369042 | orchestrator | TASK [rabbitmq : Copying over 
config.json files for services] ****************** 2025-09-03 00:47:30.369082 | orchestrator | Wednesday 03 September 2025 00:45:25 +0000 (0:00:00.786) 0:00:14.444 *** 2025-09-03 00:47:30.369103 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-03 00:47:30.369117 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-03 00:47:30.369135 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-03 00:47:30.369163 | orchestrator | 2025-09-03 00:47:30.369174 | orchestrator | TASK [rabbitmq : Copying over 
rabbitmq-env.conf] ******************************* 2025-09-03 00:47:30.369185 | orchestrator | Wednesday 03 September 2025 00:45:27 +0000 (0:00:02.155) 0:00:16.599 *** 2025-09-03 00:47:30.369196 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-09-03 00:47:30.369207 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-09-03 00:47:30.369219 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-09-03 00:47:30.369230 | orchestrator | 2025-09-03 00:47:30.369240 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2025-09-03 00:47:30.369251 | orchestrator | Wednesday 03 September 2025 00:45:29 +0000 (0:00:02.059) 0:00:18.659 *** 2025-09-03 00:47:30.369262 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-09-03 00:47:30.369273 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-09-03 00:47:30.369284 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-09-03 00:47:30.369295 | orchestrator | 2025-09-03 00:47:30.369306 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-09-03 00:47:30.369323 | orchestrator | Wednesday 03 September 2025 00:45:31 +0000 (0:00:02.173) 0:00:20.833 *** 2025-09-03 00:47:30.369334 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-09-03 00:47:30.369345 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-09-03 00:47:30.369356 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-09-03 00:47:30.369367 | orchestrator | 2025-09-03 00:47:30.369378 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2025-09-03 00:47:30.369389 | orchestrator | Wednesday 03 September 2025 00:45:33 +0000 (0:00:01.467) 0:00:22.300 *** 2025-09-03 00:47:30.369400 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-09-03 00:47:30.369411 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-09-03 00:47:30.369422 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-09-03 00:47:30.369434 | orchestrator | 2025-09-03 00:47:30.369444 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2025-09-03 00:47:30.369455 | orchestrator | Wednesday 03 September 2025 00:45:35 +0000 (0:00:01.856) 0:00:24.156 *** 2025-09-03 00:47:30.369466 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-09-03 00:47:30.369477 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-09-03 00:47:30.369488 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-09-03 00:47:30.369500 | orchestrator | 2025-09-03 00:47:30.369511 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-09-03 00:47:30.369522 | orchestrator | Wednesday 03 September 2025 00:45:36 +0000 
(0:00:01.885) 0:00:26.042 *** 2025-09-03 00:47:30.369532 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-09-03 00:47:30.369544 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-09-03 00:47:30.369555 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-09-03 00:47:30.369565 | orchestrator | 2025-09-03 00:47:30.369576 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-09-03 00:47:30.369587 | orchestrator | Wednesday 03 September 2025 00:45:38 +0000 (0:00:01.352) 0:00:27.395 *** 2025-09-03 00:47:30.369605 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:47:30.369616 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:47:30.369627 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:47:30.369638 | orchestrator | 2025-09-03 00:47:30.369649 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-09-03 00:47:30.369659 | orchestrator | Wednesday 03 September 2025 00:45:38 +0000 (0:00:00.505) 0:00:27.901 *** 2025-09-03 00:47:30.369676 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-03 00:47:30.369695 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-03 00:47:30.369709 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 
'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-03 00:47:30.369720 | orchestrator | 2025-09-03 00:47:30.369732 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2025-09-03 00:47:30.369743 | orchestrator | Wednesday 03 September 2025 00:45:40 +0000 (0:00:02.097) 0:00:29.998 *** 2025-09-03 00:47:30.369753 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:47:30.369764 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:47:30.369782 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:47:30.369793 | orchestrator | 2025-09-03 00:47:30.369803 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-09-03 00:47:30.369814 | orchestrator | Wednesday 03 September 2025 00:45:41 +0000 (0:00:00.915) 0:00:30.913 *** 2025-09-03 00:47:30.369825 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:47:30.369836 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:47:30.369847 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:47:30.369858 | orchestrator | 2025-09-03 00:47:30.369868 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-09-03 00:47:30.369879 | orchestrator | Wednesday 03 September 2025 00:45:49 +0000 (0:00:07.852) 0:00:38.766 *** 2025-09-03 00:47:30.369890 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:47:30.369901 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:47:30.369912 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:47:30.369923 | orchestrator | 2025-09-03 00:47:30.369934 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-09-03 00:47:30.369945 | orchestrator | 2025-09-03 00:47:30.369955 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-09-03 00:47:30.369966 | orchestrator | Wednesday 03 September 2025 00:45:50 +0000 (0:00:00.463) 0:00:39.229 *** 2025-09-03 00:47:30.369981 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:47:30.369993 | orchestrator | 2025-09-03 00:47:30.370003 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-09-03 00:47:30.370093 | orchestrator | Wednesday 03 September 2025 00:45:50 +0000 (0:00:00.646) 0:00:39.875 *** 2025-09-03 00:47:30.370108 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:47:30.370119 | orchestrator | 2025-09-03 00:47:30.370130 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-09-03 00:47:30.370141 | orchestrator | Wednesday 03 September 2025 00:45:51 +0000 
(0:00:00.188) 0:00:40.064 *** 2025-09-03 00:47:30.370153 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:47:30.370164 | orchestrator | 2025-09-03 00:47:30.370175 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-09-03 00:47:30.370186 | orchestrator | Wednesday 03 September 2025 00:45:53 +0000 (0:00:02.048) 0:00:42.113 *** 2025-09-03 00:47:30.370197 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:47:30.370208 | orchestrator | 2025-09-03 00:47:30.370219 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-09-03 00:47:30.370230 | orchestrator | 2025-09-03 00:47:30.370241 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-09-03 00:47:30.370252 | orchestrator | Wednesday 03 September 2025 00:46:47 +0000 (0:00:54.674) 0:01:36.788 *** 2025-09-03 00:47:30.370263 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:47:30.370274 | orchestrator | 2025-09-03 00:47:30.370285 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-09-03 00:47:30.370296 | orchestrator | Wednesday 03 September 2025 00:46:48 +0000 (0:00:00.603) 0:01:37.391 *** 2025-09-03 00:47:30.370307 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:47:30.370318 | orchestrator | 2025-09-03 00:47:30.370329 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-09-03 00:47:30.370340 | orchestrator | Wednesday 03 September 2025 00:46:48 +0000 (0:00:00.230) 0:01:37.622 *** 2025-09-03 00:47:30.370352 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:47:30.370362 | orchestrator | 2025-09-03 00:47:30.370374 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-09-03 00:47:30.370385 | orchestrator | Wednesday 03 September 2025 00:46:50 +0000 (0:00:02.008) 0:01:39.630 *** 2025-09-03 00:47:30.370396 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:47:30.370407 | orchestrator | 2025-09-03 00:47:30.370418 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-09-03 00:47:30.370429 | orchestrator | 2025-09-03 00:47:30.370440 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-09-03 00:47:30.370451 | orchestrator | Wednesday 03 September 2025 00:47:07 +0000 (0:00:16.762) 0:01:56.393 *** 2025-09-03 00:47:30.370470 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:47:30.370481 | orchestrator | 2025-09-03 00:47:30.370500 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-09-03 00:47:30.370512 | orchestrator | Wednesday 03 September 2025 00:47:07 +0000 (0:00:00.608) 0:01:57.001 *** 2025-09-03 00:47:30.370523 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:47:30.370534 | orchestrator | 2025-09-03 00:47:30.370545 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-09-03 00:47:30.370556 | orchestrator | Wednesday 03 September 2025 00:47:08 +0000 (0:00:00.203) 0:01:57.205 *** 2025-09-03 00:47:30.370567 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:47:30.370578 | orchestrator | 2025-09-03 00:47:30.370589 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-09-03 00:47:30.370600 | orchestrator | Wednesday 03 September 2025 00:47:09 +0000 (0:00:01.589) 
0:01:58.795 *** 2025-09-03 00:47:30.370611 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:47:30.370622 | orchestrator | 2025-09-03 00:47:30.370633 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2025-09-03 00:47:30.370644 | orchestrator | 2025-09-03 00:47:30.370655 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-09-03 00:47:30.370666 | orchestrator | Wednesday 03 September 2025 00:47:25 +0000 (0:00:15.709) 0:02:14.504 *** 2025-09-03 00:47:30.370691 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 00:47:30.370702 | orchestrator | 2025-09-03 00:47:30.370713 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2025-09-03 00:47:30.370724 | orchestrator | Wednesday 03 September 2025 00:47:26 +0000 (0:00:00.978) 0:02:15.483 *** 2025-09-03 00:47:30.370735 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-09-03 00:47:30.370746 | orchestrator | enable_outward_rabbitmq_True 2025-09-03 00:47:30.370757 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-09-03 00:47:30.370768 | orchestrator | outward_rabbitmq_restart 2025-09-03 00:47:30.370779 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:47:30.370791 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:47:30.370802 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:47:30.370812 | orchestrator | 2025-09-03 00:47:30.370823 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2025-09-03 00:47:30.370834 | orchestrator | skipping: no hosts matched 2025-09-03 00:47:30.370845 | orchestrator | 2025-09-03 00:47:30.370856 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2025-09-03 00:47:30.370867 | orchestrator | skipping: no hosts matched 2025-09-03 00:47:30.370878 | orchestrator | 2025-09-03 00:47:30.370889 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2025-09-03 00:47:30.370899 | orchestrator | skipping: no hosts matched 2025-09-03 00:47:30.370910 | orchestrator | 2025-09-03 00:47:30.370921 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-03 00:47:30.370933 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-09-03 00:47:30.370945 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-03 00:47:30.370962 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-03 00:47:30.370973 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-03 00:47:30.370984 | orchestrator | 2025-09-03 00:47:30.370995 | orchestrator | 2025-09-03 00:47:30.371006 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-03 00:47:30.371033 | orchestrator | Wednesday 03 September 2025 00:47:29 +0000 (0:00:02.773) 0:02:18.257 *** 2025-09-03 00:47:30.371052 | orchestrator | =============================================================================== 2025-09-03 00:47:30.371063 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 87.15s 2025-09-03 00:47:30.371074 | orchestrator | rabbitmq : Running RabbitMQ 
bootstrap container ------------------------- 7.85s 2025-09-03 00:47:30.371085 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 5.65s 2025-09-03 00:47:30.371096 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.20s 2025-09-03 00:47:30.371106 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.77s 2025-09-03 00:47:30.371117 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.17s 2025-09-03 00:47:30.371128 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.16s 2025-09-03 00:47:30.371139 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 2.10s 2025-09-03 00:47:30.371149 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.06s 2025-09-03 00:47:30.371160 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.89s 2025-09-03 00:47:30.371171 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.86s 2025-09-03 00:47:30.371182 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.86s 2025-09-03 00:47:30.371192 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.65s 2025-09-03 00:47:30.371203 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.47s 2025-09-03 00:47:30.371214 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.35s 2025-09-03 00:47:30.371225 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.09s 2025-09-03 00:47:30.371236 | orchestrator | rabbitmq : Catch when RabbitMQ is being downgraded ---------------------- 1.08s 2025-09-03 00:47:30.371254 | orchestrator | Include rabbitmq post-deploy.yml ---------------------------------------- 0.98s 2025-09-03 00:47:30.371265 | orchestrator | rabbitmq : List RabbitMQ policies --------------------------------------- 0.97s 2025-09-03 00:47:30.371276 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.92s 2025-09-03 00:47:30.371287 | orchestrator | 2025-09-03 00:47:30 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:47:30.371298 | orchestrator | 2025-09-03 00:47:30 | INFO  | Task 0a47554e-e682-4d70-9aa6-6736a719fac4 is in state STARTED 2025-09-03 00:47:30.371310 | orchestrator | 2025-09-03 00:47:30 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:47:33.410939 | orchestrator | 2025-09-03 00:47:33 | INFO  | Task f3c465b7-dc1c-42cc-acd1-2e6db06e7dc6 is in state STARTED 2025-09-03 00:47:33.411453 | orchestrator | 2025-09-03 00:47:33 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:47:33.412832 | orchestrator | 2025-09-03 00:47:33 | INFO  | Task 0a47554e-e682-4d70-9aa6-6736a719fac4 is in state STARTED 2025-09-03 00:47:33.412849 | orchestrator | 2025-09-03 00:47:33 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:47:36.446196 | orchestrator | 2025-09-03 00:47:36 | INFO  | Task f3c465b7-dc1c-42cc-acd1-2e6db06e7dc6 is in state STARTED 2025-09-03 00:47:36.447366 | orchestrator | 2025-09-03 00:47:36 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:47:36.449459 | orchestrator | 2025-09-03 00:47:36 | INFO  | Task 
0a47554e-e682-4d70-9aa6-6736a719fac4 is in state STARTED 2025-09-03 00:47:36.449564 | orchestrator | 2025-09-03 00:47:36 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:47:39.495780 | orchestrator | 2025-09-03 00:47:39 | INFO  | Task f3c465b7-dc1c-42cc-acd1-2e6db06e7dc6 is in state STARTED 2025-09-03 00:47:39.498452 | orchestrator | 2025-09-03 00:47:39 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:47:39.499860 | orchestrator | 2025-09-03 00:47:39 | INFO  | Task 0a47554e-e682-4d70-9aa6-6736a719fac4 is in state STARTED 2025-09-03 00:47:39.499899 | orchestrator | 2025-09-03 00:47:39 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:47:42.535223 | orchestrator | 2025-09-03 00:47:42 | INFO  | Task f3c465b7-dc1c-42cc-acd1-2e6db06e7dc6 is in state STARTED 2025-09-03 00:47:42.535928 | orchestrator | 2025-09-03 00:47:42 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:47:42.537212 | orchestrator | 2025-09-03 00:47:42 | INFO  | Task 0a47554e-e682-4d70-9aa6-6736a719fac4 is in state STARTED 2025-09-03 00:47:42.537534 | orchestrator | 2025-09-03 00:47:42 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:47:45.563992 | orchestrator | 2025-09-03 00:47:45 | INFO  | Task f3c465b7-dc1c-42cc-acd1-2e6db06e7dc6 is in state STARTED 2025-09-03 00:47:45.565934 | orchestrator | 2025-09-03 00:47:45 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:47:45.566592 | orchestrator | 2025-09-03 00:47:45 | INFO  | Task 0a47554e-e682-4d70-9aa6-6736a719fac4 is in state STARTED 2025-09-03 00:47:45.566618 | orchestrator | 2025-09-03 00:47:45 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:47:48.600375 | orchestrator | 2025-09-03 00:47:48 | INFO  | Task f3c465b7-dc1c-42cc-acd1-2e6db06e7dc6 is in state STARTED 2025-09-03 00:47:48.600492 | orchestrator | 2025-09-03 00:47:48 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:47:48.601105 | orchestrator | 2025-09-03 00:47:48 | INFO  | Task 0a47554e-e682-4d70-9aa6-6736a719fac4 is in state STARTED 2025-09-03 00:47:48.601130 | orchestrator | 2025-09-03 00:47:48 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:47:51.637654 | orchestrator | 2025-09-03 00:47:51 | INFO  | Task f3c465b7-dc1c-42cc-acd1-2e6db06e7dc6 is in state STARTED 2025-09-03 00:47:51.640877 | orchestrator | 2025-09-03 00:47:51 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:47:51.642446 | orchestrator | 2025-09-03 00:47:51 | INFO  | Task 0a47554e-e682-4d70-9aa6-6736a719fac4 is in state STARTED 2025-09-03 00:47:51.642924 | orchestrator | 2025-09-03 00:47:51 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:47:54.681983 | orchestrator | 2025-09-03 00:47:54 | INFO  | Task f3c465b7-dc1c-42cc-acd1-2e6db06e7dc6 is in state STARTED 2025-09-03 00:47:54.683608 | orchestrator | 2025-09-03 00:47:54 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:47:54.685700 | orchestrator | 2025-09-03 00:47:54 | INFO  | Task 0a47554e-e682-4d70-9aa6-6736a719fac4 is in state STARTED 2025-09-03 00:47:54.685725 | orchestrator | 2025-09-03 00:47:54 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:47:57.737124 | orchestrator | 2025-09-03 00:47:57 | INFO  | Task f3c465b7-dc1c-42cc-acd1-2e6db06e7dc6 is in state STARTED 2025-09-03 00:47:57.739001 | orchestrator | 2025-09-03 00:47:57 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state 
STARTED 2025-09-03 00:47:57.740345 | orchestrator | 2025-09-03 00:47:57 | INFO  | Task 0a47554e-e682-4d70-9aa6-6736a719fac4 is in state STARTED 2025-09-03 00:47:57.740370 | orchestrator | 2025-09-03 00:47:57 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:48:00.784739 | orchestrator | 2025-09-03 00:48:00 | INFO  | Task f3c465b7-dc1c-42cc-acd1-2e6db06e7dc6 is in state STARTED 2025-09-03 00:48:00.785496 | orchestrator | 2025-09-03 00:48:00 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:48:00.787312 | orchestrator | 2025-09-03 00:48:00 | INFO  | Task 0a47554e-e682-4d70-9aa6-6736a719fac4 is in state STARTED 2025-09-03 00:48:00.787340 | orchestrator | 2025-09-03 00:48:00 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:48:03.862224 | orchestrator | 2025-09-03 00:48:03 | INFO  | Task f3c465b7-dc1c-42cc-acd1-2e6db06e7dc6 is in state STARTED 2025-09-03 00:48:03.863868 | orchestrator | 2025-09-03 00:48:03 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:48:03.865912 | orchestrator | 2025-09-03 00:48:03 | INFO  | Task 0a47554e-e682-4d70-9aa6-6736a719fac4 is in state STARTED 2025-09-03 00:48:03.866179 | orchestrator | 2025-09-03 00:48:03 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:48:06.908754 | orchestrator | 2025-09-03 00:48:06 | INFO  | Task f3c465b7-dc1c-42cc-acd1-2e6db06e7dc6 is in state STARTED 2025-09-03 00:48:06.908870 | orchestrator | 2025-09-03 00:48:06 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:48:06.910802 | orchestrator | 2025-09-03 00:48:06 | INFO  | Task 0a47554e-e682-4d70-9aa6-6736a719fac4 is in state STARTED 2025-09-03 00:48:06.910842 | orchestrator | 2025-09-03 00:48:06 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:48:09.956803 | orchestrator | 2025-09-03 00:48:09 | INFO  | Task f3c465b7-dc1c-42cc-acd1-2e6db06e7dc6 is in state STARTED 2025-09-03 00:48:09.957898 | orchestrator | 2025-09-03 00:48:09 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:48:09.959361 | orchestrator | 2025-09-03 00:48:09 | INFO  | Task 0a47554e-e682-4d70-9aa6-6736a719fac4 is in state STARTED 2025-09-03 00:48:09.959415 | orchestrator | 2025-09-03 00:48:09 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:48:13.007256 | orchestrator | 2025-09-03 00:48:13 | INFO  | Task f3c465b7-dc1c-42cc-acd1-2e6db06e7dc6 is in state STARTED 2025-09-03 00:48:13.007376 | orchestrator | 2025-09-03 00:48:13 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:48:13.007393 | orchestrator | 2025-09-03 00:48:13 | INFO  | Task 0a47554e-e682-4d70-9aa6-6736a719fac4 is in state STARTED 2025-09-03 00:48:13.007411 | orchestrator | 2025-09-03 00:48:13 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:48:16.069917 | orchestrator | 2025-09-03 00:48:16 | INFO  | Task f3c465b7-dc1c-42cc-acd1-2e6db06e7dc6 is in state STARTED 2025-09-03 00:48:16.070942 | orchestrator | 2025-09-03 00:48:16 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:48:16.071835 | orchestrator | 2025-09-03 00:48:16 | INFO  | Task 0a47554e-e682-4d70-9aa6-6736a719fac4 is in state STARTED 2025-09-03 00:48:16.071870 | orchestrator | 2025-09-03 00:48:16 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:48:19.107513 | orchestrator | 2025-09-03 00:48:19 | INFO  | Task f3c465b7-dc1c-42cc-acd1-2e6db06e7dc6 is in state STARTED 2025-09-03 00:48:19.108330 | orchestrator 
| 2025-09-03 00:48:19 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:48:19.108366 | orchestrator | 2025-09-03 00:48:19 | INFO  | Task 0a47554e-e682-4d70-9aa6-6736a719fac4 is in state STARTED 2025-09-03 00:48:19.108378 | orchestrator | 2025-09-03 00:48:19 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:48:22.136346 | orchestrator | 2025-09-03 00:48:22 | INFO  | Task f3c465b7-dc1c-42cc-acd1-2e6db06e7dc6 is in state STARTED 2025-09-03 00:48:22.136444 | orchestrator | 2025-09-03 00:48:22 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:48:22.138720 | orchestrator | 2025-09-03 00:48:22 | INFO  | Task 0a47554e-e682-4d70-9aa6-6736a719fac4 is in state SUCCESS 2025-09-03 00:48:22.140801 | orchestrator | 2025-09-03 00:48:22.140837 | orchestrator | 2025-09-03 00:48:22.140848 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-03 00:48:22.140859 | orchestrator | 2025-09-03 00:48:22.140869 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-03 00:48:22.140879 | orchestrator | Wednesday 03 September 2025 00:45:58 +0000 (0:00:00.309) 0:00:00.310 *** 2025-09-03 00:48:22.140889 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:48:22.140902 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:48:22.140912 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:48:22.140922 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:48:22.140932 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:48:22.140942 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:48:22.140952 | orchestrator | 2025-09-03 00:48:22.140962 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-03 00:48:22.140971 | orchestrator | Wednesday 03 September 2025 00:46:00 +0000 (0:00:01.502) 0:00:01.812 *** 2025-09-03 00:48:22.140981 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-09-03 00:48:22.140991 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-09-03 00:48:22.141032 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-09-03 00:48:22.141042 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-09-03 00:48:22.141052 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-09-03 00:48:22.141062 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-09-03 00:48:22.141071 | orchestrator | 2025-09-03 00:48:22.141081 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-09-03 00:48:22.141091 | orchestrator | 2025-09-03 00:48:22.141101 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-09-03 00:48:22.141111 | orchestrator | Wednesday 03 September 2025 00:46:01 +0000 (0:00:01.661) 0:00:03.474 *** 2025-09-03 00:48:22.141122 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 00:48:22.141133 | orchestrator | 2025-09-03 00:48:22.141143 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-09-03 00:48:22.141153 | orchestrator | Wednesday 03 September 2025 00:46:02 +0000 (0:00:01.267) 0:00:04.742 *** 2025-09-03 00:48:22.141165 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 
'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.141193 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.141204 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.141266 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.141292 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.141303 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.141313 | orchestrator | 2025-09-03 00:48:22.141334 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2025-09-03 00:48:22.141345 | orchestrator | Wednesday 03 September 2025 00:46:04 +0000 (0:00:01.656) 0:00:06.398 *** 2025-09-03 00:48:22.141355 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.141366 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 
'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.141429 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.141440 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.141452 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.141464 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.141488 | orchestrator | 2025-09-03 00:48:22.141499 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2025-09-03 00:48:22.141511 | orchestrator | Wednesday 03 September 2025 00:46:06 +0000 (0:00:01.910) 0:00:08.309 *** 2025-09-03 00:48:22.141596 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.141617 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.141639 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.141651 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.141663 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.141675 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.141686 | orchestrator | 2025-09-03 00:48:22.141698 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2025-09-03 00:48:22.141710 | orchestrator | Wednesday 03 September 2025 00:46:07 +0000 (0:00:00.939) 0:00:09.248 *** 2025-09-03 00:48:22.141721 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.141737 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.141756 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.141768 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.141778 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.141788 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.141798 | orchestrator | 2025-09-03 00:48:22.141812 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2025-09-03 00:48:22.141823 | orchestrator | Wednesday 03 September 2025 00:46:08 +0000 (0:00:01.537) 0:00:10.786 *** 2025-09-03 00:48:22.141833 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.141843 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.141853 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.141864 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.141874 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.141894 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.141904 | orchestrator | 2025-09-03 00:48:22.141914 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2025-09-03 00:48:22.141924 | orchestrator | Wednesday 03 September 2025 00:46:10 +0000 (0:00:01.173) 0:00:11.959 *** 2025-09-03 00:48:22.141934 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:48:22.141945 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:48:22.141955 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:48:22.141964 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:48:22.141974 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:48:22.141984 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:48:22.141994 | orchestrator | 2025-09-03 00:48:22.142072 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2025-09-03 00:48:22.142083 | orchestrator | Wednesday 03 September 2025 00:46:12 +0000 (0:00:02.642) 0:00:14.601 *** 2025-09-03 00:48:22.142093 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2025-09-03 00:48:22.142103 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2025-09-03 00:48:22.142112 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2025-09-03 00:48:22.142122 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2025-09-03 00:48:22.142132 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2025-09-03 00:48:22.142141 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2025-09-03 00:48:22.142151 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-03 00:48:22.142161 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-03 00:48:22.142177 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-03 00:48:22.142187 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-03 00:48:22.142197 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-03 00:48:22.142206 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-03 00:48:22.142216 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-03 00:48:22.142228 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 
'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-03 00:48:22.142238 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-03 00:48:22.142248 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-03 00:48:22.142258 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-03 00:48:22.142267 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-03 00:48:22.142284 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-03 00:48:22.142295 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-03 00:48:22.142305 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-03 00:48:22.142315 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-03 00:48:22.142325 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-03 00:48:22.142334 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-03 00:48:22.142344 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-03 00:48:22.142354 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-03 00:48:22.142363 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-03 00:48:22.142373 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-03 00:48:22.142387 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-03 00:48:22.142397 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-03 00:48:22.142407 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-03 00:48:22.142417 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-03 00:48:22.142427 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-03 00:48:22.142437 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-03 00:48:22.142446 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-03 00:48:22.142456 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-03 00:48:22.142466 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-09-03 00:48:22.142476 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-09-03 00:48:22.142486 | orchestrator | ok: [testbed-node-5] => 
(item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-09-03 00:48:22.142496 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-09-03 00:48:22.142506 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-09-03 00:48:22.142516 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-09-03 00:48:22.142525 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2025-09-03 00:48:22.142536 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2025-09-03 00:48:22.142550 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2025-09-03 00:48:22.142560 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2025-09-03 00:48:22.142571 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2025-09-03 00:48:22.142587 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2025-09-03 00:48:22.142597 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-09-03 00:48:22.142607 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-09-03 00:48:22.142616 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-09-03 00:48:22.142626 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-09-03 00:48:22.142636 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-09-03 00:48:22.142646 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-09-03 00:48:22.142655 | orchestrator | 2025-09-03 00:48:22.142665 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-03 00:48:22.142675 | orchestrator | Wednesday 03 September 2025 00:46:31 +0000 (0:00:18.226) 0:00:32.828 *** 2025-09-03 00:48:22.142685 | orchestrator | 2025-09-03 00:48:22.142695 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-03 00:48:22.142704 | orchestrator | Wednesday 03 September 2025 00:46:31 +0000 (0:00:00.239) 0:00:33.068 *** 2025-09-03 00:48:22.142714 | orchestrator | 2025-09-03 00:48:22.142724 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-03 00:48:22.142733 | orchestrator | Wednesday 03 September 2025 00:46:31 +0000 (0:00:00.070) 0:00:33.138 *** 2025-09-03 00:48:22.142743 | orchestrator | 2025-09-03 00:48:22.142752 | orchestrator | TASK [ovn-controller : Flush handlers] 
*****************************************
2025-09-03 00:48:22.142762 | orchestrator | Wednesday 03 September 2025 00:46:31 +0000 (0:00:00.080) 0:00:33.218 ***
2025-09-03 00:48:22.142772 | orchestrator |
2025-09-03 00:48:22.142781 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-09-03 00:48:22.142791 | orchestrator | Wednesday 03 September 2025 00:46:31 +0000 (0:00:00.069) 0:00:33.287 ***
2025-09-03 00:48:22.142801 | orchestrator |
2025-09-03 00:48:22.142811 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-09-03 00:48:22.142820 | orchestrator | Wednesday 03 September 2025 00:46:31 +0000 (0:00:00.065) 0:00:33.353 ***
2025-09-03 00:48:22.142830 | orchestrator |
2025-09-03 00:48:22.142840 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] ***********************
2025-09-03 00:48:22.142854 | orchestrator | Wednesday 03 September 2025 00:46:31 +0000 (0:00:00.058) 0:00:33.411 ***
2025-09-03 00:48:22.142864 | orchestrator | ok: [testbed-node-3]
2025-09-03 00:48:22.142873 | orchestrator | ok: [testbed-node-0]
2025-09-03 00:48:22.142884 | orchestrator | ok: [testbed-node-5]
2025-09-03 00:48:22.142893 | orchestrator | ok: [testbed-node-4]
2025-09-03 00:48:22.142903 | orchestrator | ok: [testbed-node-1]
2025-09-03 00:48:22.142913 | orchestrator | ok: [testbed-node-2]
2025-09-03 00:48:22.142923 | orchestrator |
2025-09-03 00:48:22.142932 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************
2025-09-03 00:48:22.142942 | orchestrator | Wednesday 03 September 2025 00:46:33 +0000 (0:00:01.639) 0:00:35.050 ***
2025-09-03 00:48:22.142952 | orchestrator | changed: [testbed-node-0]
2025-09-03 00:48:22.142962 | orchestrator | changed: [testbed-node-3]
2025-09-03 00:48:22.142971 | orchestrator | changed: [testbed-node-4]
2025-09-03 00:48:22.142981 | orchestrator | changed: [testbed-node-2]
2025-09-03 00:48:22.142991 | orchestrator | changed: [testbed-node-1]
2025-09-03 00:48:22.143018 | orchestrator | changed: [testbed-node-5]
2025-09-03 00:48:22.143028 | orchestrator |
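The play above turns each hypervisor into an OVN chassis: it writes the chassis settings into the local Open vSwitch database ("Configure OVN in OVSDB") and then restarts the ovn_controller container so the daemon picks them up. A rough hand-run equivalent of what that task recorded for testbed-node-0 might look like the sketch below; this is illustrative only, the role drives these settings through Ansible, and the values are simply the ones visible in the log.

  # Tunnel endpoint, encapsulation and southbound cluster for this chassis
  ovs-vsctl set open_vswitch . \
      external_ids:ovn-encap-ip=192.168.16.10 \
      external_ids:ovn-encap-type=geneve \
      external_ids:ovn-remote="tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642" \
      external_ids:ovn-remote-probe-interval=60000 \
      external_ids:ovn-openflow-probe-interval=60 \
      external_ids:ovn-monitor-all=false

  # Only the gateway chassis (testbed-node-0/1/2) additionally carry the
  # provider bridge mapping and the CMS gateway options
  ovs-vsctl set open_vswitch . \
      external_ids:ovn-bridge-mappings=physnet1:br-ex \
      external_ids:ovn-cms-options="enable-chassis-as-gw,availability-zones=nova"

On the compute-only chassis (testbed-node-3/4/5) the task instead sets ovn-chassis-mac-mappings and drops the bridge mapping and CMS options, which is why those items appear with state 'present' on some hosts and 'absent' on others in the output above.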
2025-09-03 00:48:22.143038 | orchestrator | PLAY [Apply role ovn-db] *******************************************************
2025-09-03 00:48:22.143054 | orchestrator |
2025-09-03 00:48:22.143064 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-09-03 00:48:22.143073 | orchestrator | Wednesday 03 September 2025 00:47:07 +0000 (0:00:34.735) 0:01:09.786 ***
2025-09-03 00:48:22.143083 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-03 00:48:22.143092 | orchestrator |
2025-09-03 00:48:22.143102 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-09-03 00:48:22.143112 | orchestrator | Wednesday 03 September 2025 00:47:08 +0000 (0:00:00.693) 0:01:10.479 ***
2025-09-03 00:48:22.143121 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-03 00:48:22.143131 | orchestrator |
2025-09-03 00:48:22.143141 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] *************
2025-09-03 00:48:22.143150 | orchestrator | Wednesday 03 September 2025 00:47:09 +0000 (0:00:00.506) 0:01:10.986 ***
2025-09-03 00:48:22.143160 | orchestrator | ok: [testbed-node-1]
2025-09-03 00:48:22.143170 | orchestrator | ok: [testbed-node-0]
2025-09-03 00:48:22.143180 | orchestrator | ok: [testbed-node-2]
2025-09-03 00:48:22.143189 | orchestrator |
2025-09-03 00:48:22.143199 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] ***************
2025-09-03 00:48:22.143209 | orchestrator | Wednesday 03 September 2025 00:47:10 +0000 (0:00:00.929) 0:01:11.916 ***
2025-09-03 00:48:22.143219 | orchestrator | ok: [testbed-node-0]
2025-09-03 00:48:22.143228 | orchestrator | ok: [testbed-node-1]
2025-09-03 00:48:22.143238 | orchestrator | ok: [testbed-node-2]
2025-09-03 00:48:22.143253 | orchestrator |
2025-09-03 00:48:22.143264 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] ***************
2025-09-03 00:48:22.143273 | orchestrator | Wednesday 03 September 2025 00:47:10 +0000 (0:00:00.308) 0:01:12.224 ***
2025-09-03 00:48:22.143283 | orchestrator | ok: [testbed-node-0]
2025-09-03 00:48:22.143293 | orchestrator | ok: [testbed-node-1]
2025-09-03 00:48:22.143302 | orchestrator | ok: [testbed-node-2]
2025-09-03 00:48:22.143312 | orchestrator |
2025-09-03 00:48:22.143322 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] *******
2025-09-03 00:48:22.143332 | orchestrator | Wednesday 03 September 2025 00:47:10 +0000 (0:00:00.309) 0:01:12.534 ***
2025-09-03 00:48:22.143342 | orchestrator | ok: [testbed-node-0]
2025-09-03 00:48:22.143351 | orchestrator | ok: [testbed-node-1]
2025-09-03 00:48:22.143361 | orchestrator | ok: [testbed-node-2]
2025-09-03 00:48:22.143371 | orchestrator |
2025-09-03 00:48:22.143381 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
2025-09-03 00:48:22.143391 | orchestrator | Wednesday 03 September 2025 00:47:11 +0000 (0:00:00.295) 0:01:12.829 ***
2025-09-03 00:48:22.143401 | orchestrator | ok: [testbed-node-0]
2025-09-03 00:48:22.143410 | orchestrator | ok: [testbed-node-1]
2025-09-03 00:48:22.143420 | orchestrator | ok: [testbed-node-2]
2025-09-03 00:48:22.143430 | orchestrator |
2025-09-03 00:48:22.143440 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
2025-09-03 00:48:22.143449 | orchestrator | Wednesday 03 September 2025 00:47:11 +0000 (0:00:00.452) 0:01:13.282 ***
2025-09-03 00:48:22.143459 | orchestrator | skipping: [testbed-node-0]
2025-09-03 00:48:22.143469 | orchestrator | skipping: [testbed-node-1]
2025-09-03 00:48:22.143479 | orchestrator | skipping: [testbed-node-2]
2025-09-03 00:48:22.143489 | orchestrator |
2025-09-03 00:48:22.143498 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] *****************************
2025-09-03 00:48:22.143508 | orchestrator | Wednesday 03 September 2025 00:47:11 +0000 (0:00:00.279) 0:01:13.561 ***
2025-09-03 00:48:22.143518 | orchestrator | skipping: [testbed-node-0]
2025-09-03 00:48:22.143528 | orchestrator | skipping: [testbed-node-1]
2025-09-03 00:48:22.143538 | orchestrator | skipping: [testbed-node-2]
2025-09-03 00:48:22.143547 | orchestrator |
2025-09-03 00:48:22.143557 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] *************
2025-09-03 00:48:22.143567 | orchestrator | Wednesday 03 September 2025 00:47:12 +0000 (0:00:00.259) 0:01:13.821 ***
2025-09-03 00:48:22.143583 | orchestrator | skipping: [testbed-node-0]
2025-09-03 00:48:22.143593 | orchestrator | skipping: [testbed-node-1]
2025-09-03 00:48:22.143603 | orchestrator | skipping: [testbed-node-2]
2025-09-03 00:48:22.143613 | orchestrator |
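The lookup_cluster.yml tasks above decide whether an OVN NB/SB cluster already exists before anything is bootstrapped: they look for existing DB container volumes and would probe the service ports of already-running members. On this fresh testbed the port checks are skipped and a new cluster is bootstrapped instead. A minimal manual spot-check along the same lines might look like the following; this is an illustrative sketch and not the role's actual tasks, with 6641/6642 being the conventional OVN NB/SB ports and the volume names following the Kolla defaults seen above.

  # Leftover OVN DB volumes from a previous deployment?
  docker volume ls --filter name=ovn_nb_db --filter name=ovn_sb_db

  # Anything already listening on the NB (6641) / SB (6642) ports of the first controller?
  nc -z -w 3 192.168.16.10 6641 && echo "NB port is live" || echo "NB port closed"
  nc -z -w 3 192.168.16.10 6642 && echo "SB port is live" || echo "SB port closed"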
2025-09-03 00:48:22.143622 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2025-09-03 00:48:22.143632 | orchestrator | Wednesday 03 September 2025 00:47:12 +0000 (0:00:00.296) 0:01:14.117 *** 2025-09-03 00:48:22.143642 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:48:22.143651 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:48:22.143661 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:48:22.143671 | orchestrator | 2025-09-03 00:48:22.143681 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2025-09-03 00:48:22.143691 | orchestrator | Wednesday 03 September 2025 00:47:12 +0000 (0:00:00.450) 0:01:14.567 *** 2025-09-03 00:48:22.143700 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:48:22.143710 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:48:22.143720 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:48:22.143729 | orchestrator | 2025-09-03 00:48:22.143764 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2025-09-03 00:48:22.143786 | orchestrator | Wednesday 03 September 2025 00:47:13 +0000 (0:00:00.293) 0:01:14.861 *** 2025-09-03 00:48:22.143803 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:48:22.143820 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:48:22.143837 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:48:22.143852 | orchestrator | 2025-09-03 00:48:22.143863 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2025-09-03 00:48:22.143874 | orchestrator | Wednesday 03 September 2025 00:47:13 +0000 (0:00:00.272) 0:01:15.134 *** 2025-09-03 00:48:22.143885 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:48:22.143895 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:48:22.143906 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:48:22.143917 | orchestrator | 2025-09-03 00:48:22.143928 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2025-09-03 00:48:22.143938 | orchestrator | Wednesday 03 September 2025 00:47:13 +0000 (0:00:00.276) 0:01:15.410 *** 2025-09-03 00:48:22.143949 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:48:22.143960 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:48:22.143971 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:48:22.143981 | orchestrator | 2025-09-03 00:48:22.143992 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-09-03 00:48:22.144051 | orchestrator | Wednesday 03 September 2025 00:47:13 +0000 (0:00:00.279) 0:01:15.690 *** 2025-09-03 00:48:22.144064 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:48:22.144076 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:48:22.144086 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:48:22.144097 | orchestrator | 2025-09-03 00:48:22.144108 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2025-09-03 00:48:22.144119 | orchestrator | Wednesday 03 September 2025 00:47:14 +0000 (0:00:00.486) 0:01:16.177 *** 2025-09-03 00:48:22.144130 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:48:22.144141 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:48:22.144152 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:48:22.144162 | orchestrator | 2025-09-03 00:48:22.144173 | orchestrator | TASK [ovn-db : 
Divide hosts by their OVN SB leader/follower role] ************** 2025-09-03 00:48:22.144184 | orchestrator | Wednesday 03 September 2025 00:47:14 +0000 (0:00:00.316) 0:01:16.494 *** 2025-09-03 00:48:22.144195 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:48:22.144206 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:48:22.144217 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:48:22.144228 | orchestrator | 2025-09-03 00:48:22.144239 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-09-03 00:48:22.144249 | orchestrator | Wednesday 03 September 2025 00:47:15 +0000 (0:00:00.332) 0:01:16.826 *** 2025-09-03 00:48:22.144269 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:48:22.144280 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:48:22.144299 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:48:22.144311 | orchestrator | 2025-09-03 00:48:22.144322 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-09-03 00:48:22.144333 | orchestrator | Wednesday 03 September 2025 00:47:15 +0000 (0:00:00.284) 0:01:17.111 *** 2025-09-03 00:48:22.144360 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 00:48:22.144372 | orchestrator | 2025-09-03 00:48:22.144383 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2025-09-03 00:48:22.144394 | orchestrator | Wednesday 03 September 2025 00:47:16 +0000 (0:00:00.843) 0:01:17.954 *** 2025-09-03 00:48:22.144405 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:48:22.144416 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:48:22.144427 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:48:22.144438 | orchestrator | 2025-09-03 00:48:22.144449 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-09-03 00:48:22.144461 | orchestrator | Wednesday 03 September 2025 00:47:16 +0000 (0:00:00.507) 0:01:18.462 *** 2025-09-03 00:48:22.144472 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:48:22.144483 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:48:22.144494 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:48:22.144505 | orchestrator | 2025-09-03 00:48:22.144516 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-09-03 00:48:22.144527 | orchestrator | Wednesday 03 September 2025 00:47:17 +0000 (0:00:00.370) 0:01:18.832 *** 2025-09-03 00:48:22.144538 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:48:22.144549 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:48:22.144560 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:48:22.144571 | orchestrator | 2025-09-03 00:48:22.144582 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-09-03 00:48:22.144593 | orchestrator | Wednesday 03 September 2025 00:47:17 +0000 (0:00:00.421) 0:01:19.253 *** 2025-09-03 00:48:22.144604 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:48:22.144615 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:48:22.144626 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:48:22.144637 | orchestrator | 2025-09-03 00:48:22.144648 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-09-03 00:48:22.144659 | orchestrator | Wednesday 03 September 2025 00:47:17 
+0000 (0:00:00.329) 0:01:19.583 *** 2025-09-03 00:48:22.144670 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:48:22.144681 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:48:22.144692 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:48:22.144703 | orchestrator | 2025-09-03 00:48:22.144714 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-09-03 00:48:22.144725 | orchestrator | Wednesday 03 September 2025 00:47:18 +0000 (0:00:00.270) 0:01:19.854 *** 2025-09-03 00:48:22.144736 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:48:22.144747 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:48:22.144759 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:48:22.144770 | orchestrator | 2025-09-03 00:48:22.144781 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-09-03 00:48:22.144792 | orchestrator | Wednesday 03 September 2025 00:47:18 +0000 (0:00:00.285) 0:01:20.139 *** 2025-09-03 00:48:22.144803 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:48:22.144814 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:48:22.144826 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:48:22.144837 | orchestrator | 2025-09-03 00:48:22.144853 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-09-03 00:48:22.144864 | orchestrator | Wednesday 03 September 2025 00:47:18 +0000 (0:00:00.463) 0:01:20.603 *** 2025-09-03 00:48:22.144875 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:48:22.144887 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:48:22.144910 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:48:22.144921 | orchestrator | 2025-09-03 00:48:22.144933 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-09-03 00:48:22.144944 | orchestrator | Wednesday 03 September 2025 00:47:19 +0000 (0:00:00.277) 0:01:20.881 *** 2025-09-03 00:48:22.144956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.144971 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.144982 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.145016 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.145031 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.145043 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.145054 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.145066 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.145090 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.145110 | orchestrator | 2025-09-03 00:48:22.145122 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-09-03 00:48:22.145133 | orchestrator | Wednesday 03 September 2025 00:47:20 +0000 (0:00:01.287) 0:01:22.168 *** 2025-09-03 00:48:22.145150 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.145162 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.145173 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 
'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.145184 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.145202 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.145214 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.145226 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.145237 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.145249 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.145267 | orchestrator | 2025-09-03 00:48:22.145278 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-09-03 00:48:22.145290 | orchestrator | Wednesday 03 September 2025 00:47:23 +0000 (0:00:03.577) 0:01:25.746 *** 2025-09-03 00:48:22.145301 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-09-03 00:48:22.145317 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.145329 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.145341 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.145352 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.145372 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.145384 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.145395 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.145407 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.145418 | orchestrator | 2025-09-03 00:48:22.145429 | orchestrator | TASK [ovn-db : 
Flush handlers] ************************************************* 2025-09-03 00:48:22.145447 | orchestrator | Wednesday 03 September 2025 00:47:26 +0000 (0:00:02.069) 0:01:27.816 *** 2025-09-03 00:48:22.145458 | orchestrator | 2025-09-03 00:48:22.145470 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-03 00:48:22.145481 | orchestrator | Wednesday 03 September 2025 00:47:26 +0000 (0:00:00.477) 0:01:28.293 *** 2025-09-03 00:48:22.145492 | orchestrator | 2025-09-03 00:48:22.145503 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-03 00:48:22.145514 | orchestrator | Wednesday 03 September 2025 00:47:26 +0000 (0:00:00.095) 0:01:28.389 *** 2025-09-03 00:48:22.145525 | orchestrator | 2025-09-03 00:48:22.145536 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-09-03 00:48:22.145547 | orchestrator | Wednesday 03 September 2025 00:47:26 +0000 (0:00:00.084) 0:01:28.473 *** 2025-09-03 00:48:22.145558 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:48:22.145568 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:48:22.145580 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:48:22.145590 | orchestrator | 2025-09-03 00:48:22.145602 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-09-03 00:48:22.145612 | orchestrator | Wednesday 03 September 2025 00:47:34 +0000 (0:00:07.606) 0:01:36.080 *** 2025-09-03 00:48:22.145624 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:48:22.145634 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:48:22.145650 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:48:22.145661 | orchestrator | 2025-09-03 00:48:22.145672 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-09-03 00:48:22.145683 | orchestrator | Wednesday 03 September 2025 00:47:41 +0000 (0:00:07.455) 0:01:43.536 *** 2025-09-03 00:48:22.145694 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:48:22.145705 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:48:22.145716 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:48:22.145727 | orchestrator | 2025-09-03 00:48:22.145738 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-09-03 00:48:22.145749 | orchestrator | Wednesday 03 September 2025 00:47:44 +0000 (0:00:02.428) 0:01:45.964 *** 2025-09-03 00:48:22.145760 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:48:22.145771 | orchestrator | 2025-09-03 00:48:22.145782 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-09-03 00:48:22.145793 | orchestrator | Wednesday 03 September 2025 00:47:44 +0000 (0:00:00.104) 0:01:46.068 *** 2025-09-03 00:48:22.145804 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:48:22.145815 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:48:22.145827 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:48:22.145837 | orchestrator | 2025-09-03 00:48:22.145848 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-09-03 00:48:22.145859 | orchestrator | Wednesday 03 September 2025 00:47:45 +0000 (0:00:01.326) 0:01:47.394 *** 2025-09-03 00:48:22.145870 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:48:22.145881 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:48:22.145892 | orchestrator | 
changed: [testbed-node-0] 2025-09-03 00:48:22.145903 | orchestrator | 2025-09-03 00:48:22.145914 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-09-03 00:48:22.145925 | orchestrator | Wednesday 03 September 2025 00:47:46 +0000 (0:00:00.668) 0:01:48.062 *** 2025-09-03 00:48:22.145936 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:48:22.145947 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:48:22.145958 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:48:22.145969 | orchestrator | 2025-09-03 00:48:22.145980 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-09-03 00:48:22.145991 | orchestrator | Wednesday 03 September 2025 00:47:47 +0000 (0:00:00.843) 0:01:48.905 *** 2025-09-03 00:48:22.146059 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:48:22.146074 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:48:22.146085 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:48:22.146103 | orchestrator | 2025-09-03 00:48:22.146114 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-09-03 00:48:22.146125 | orchestrator | Wednesday 03 September 2025 00:47:47 +0000 (0:00:00.584) 0:01:49.490 *** 2025-09-03 00:48:22.146136 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:48:22.146147 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:48:22.146167 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:48:22.146178 | orchestrator | 2025-09-03 00:48:22.146190 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-09-03 00:48:22.146201 | orchestrator | Wednesday 03 September 2025 00:47:48 +0000 (0:00:01.101) 0:01:50.591 *** 2025-09-03 00:48:22.146212 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:48:22.146223 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:48:22.146234 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:48:22.146245 | orchestrator | 2025-09-03 00:48:22.146256 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-09-03 00:48:22.146267 | orchestrator | Wednesday 03 September 2025 00:47:49 +0000 (0:00:00.774) 0:01:51.365 *** 2025-09-03 00:48:22.146278 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:48:22.146289 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:48:22.146300 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:48:22.146311 | orchestrator | 2025-09-03 00:48:22.146322 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-09-03 00:48:22.146334 | orchestrator | Wednesday 03 September 2025 00:47:49 +0000 (0:00:00.315) 0:01:51.680 *** 2025-09-03 00:48:22.146345 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.146356 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.146368 | 
orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.146379 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.146396 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.146408 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.146420 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.146437 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.146456 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.146468 | orchestrator | 2025-09-03 00:48:22.146480 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-09-03 00:48:22.146491 | orchestrator | Wednesday 03 September 2025 00:47:51 +0000 (0:00:01.402) 0:01:53.082 *** 2025-09-03 00:48:22.146502 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.146513 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.146524 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.146536 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.146547 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.146562 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.146574 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.146592 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.146603 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2025-09-03 00:48:22.146614 | orchestrator | 2025-09-03 00:48:22.146625 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-09-03 00:48:22.146636 | orchestrator | Wednesday 03 September 2025 00:47:55 +0000 (0:00:03.847) 0:01:56.929 *** 2025-09-03 00:48:22.146654 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.146666 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.146677 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.146688 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.146700 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.146711 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.146727 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.146744 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.146756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:48:22.146767 | orchestrator | 2025-09-03 00:48:22.146778 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-03 00:48:22.146789 | orchestrator | Wednesday 03 September 2025 00:47:57 +0000 (0:00:02.800) 0:01:59.730 *** 2025-09-03 00:48:22.146800 | orchestrator | 2025-09-03 00:48:22.146811 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-03 00:48:22.146822 | orchestrator | Wednesday 03 September 2025 00:47:58 +0000 (0:00:00.087) 0:01:59.817 *** 2025-09-03 00:48:22.146833 | orchestrator | 2025-09-03 00:48:22.146844 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-03 00:48:22.146855 | orchestrator | Wednesday 03 September 2025 00:47:58 +0000 (0:00:00.074) 0:01:59.892 *** 2025-09-03 00:48:22.146866 | orchestrator | 2025-09-03 00:48:22.146877 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-09-03 00:48:22.146887 | orchestrator | Wednesday 03 September 2025 00:47:58 +0000 (0:00:00.071) 0:01:59.963 *** 2025-09-03 00:48:22.146899 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:48:22.146910 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:48:22.146921 | orchestrator | 2025-09-03 00:48:22.146938 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-09-03 00:48:22.146949 | orchestrator | Wednesday 03 September 2025 00:48:04 +0000 (0:00:06.253) 0:02:06.217 *** 2025-09-03 00:48:22.146960 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:48:22.146971 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:48:22.146982 | orchestrator | 2025-09-03 00:48:22.146993 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-09-03 00:48:22.147019 | orchestrator | Wednesday 03 September 2025 00:48:10 +0000 (0:00:06.066) 0:02:12.284 *** 2025-09-03 00:48:22.147030 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:48:22.147041 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:48:22.147052 | orchestrator | 2025-09-03 00:48:22.147063 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-09-03 00:48:22.147074 | orchestrator | Wednesday 03 September 2025 00:48:16 +0000 (0:00:06.467) 0:02:18.751 *** 2025-09-03 00:48:22.147085 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:48:22.147096 | orchestrator | 2025-09-03 00:48:22.147106 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-09-03 00:48:22.147117 | orchestrator | Wednesday 03 September 2025 00:48:17 +0000 (0:00:00.110) 0:02:18.862 *** 2025-09-03 00:48:22.147128 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:48:22.147139 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:48:22.147150 | orchestrator | ok: [testbed-node-2] 2025-09-03 
00:48:22.147161 | orchestrator | 2025-09-03 00:48:22.147172 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-09-03 00:48:22.147182 | orchestrator | Wednesday 03 September 2025 00:48:17 +0000 (0:00:00.746) 0:02:19.608 *** 2025-09-03 00:48:22.147193 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:48:22.147204 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:48:22.147222 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:48:22.147233 | orchestrator | 2025-09-03 00:48:22.147244 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-09-03 00:48:22.147255 | orchestrator | Wednesday 03 September 2025 00:48:18 +0000 (0:00:00.523) 0:02:20.132 *** 2025-09-03 00:48:22.147266 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:48:22.147277 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:48:22.147288 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:48:22.147299 | orchestrator | 2025-09-03 00:48:22.147310 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-09-03 00:48:22.147321 | orchestrator | Wednesday 03 September 2025 00:48:19 +0000 (0:00:00.686) 0:02:20.818 *** 2025-09-03 00:48:22.147332 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:48:22.147343 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:48:22.147354 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:48:22.147365 | orchestrator | 2025-09-03 00:48:22.147376 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-09-03 00:48:22.147387 | orchestrator | Wednesday 03 September 2025 00:48:19 +0000 (0:00:00.747) 0:02:21.565 *** 2025-09-03 00:48:22.147398 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:48:22.147409 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:48:22.147420 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:48:22.147431 | orchestrator | 2025-09-03 00:48:22.147442 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-09-03 00:48:22.147453 | orchestrator | Wednesday 03 September 2025 00:48:20 +0000 (0:00:00.655) 0:02:22.220 *** 2025-09-03 00:48:22.147464 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:48:22.147476 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:48:22.147487 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:48:22.147498 | orchestrator | 2025-09-03 00:48:22.147517 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-03 00:48:22.147528 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-09-03 00:48:22.147540 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-09-03 00:48:22.147551 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-09-03 00:48:22.147562 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-03 00:48:22.147573 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-03 00:48:22.147584 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-03 00:48:22.147595 | orchestrator | 2025-09-03 00:48:22.147606 | orchestrator | 2025-09-03 00:48:22.147617 | orchestrator | TASKS RECAP 
******************************************************************** 2025-09-03 00:48:22.147629 | orchestrator | Wednesday 03 September 2025 00:48:21 +0000 (0:00:00.827) 0:02:23.048 *** 2025-09-03 00:48:22.147640 | orchestrator | =============================================================================== 2025-09-03 00:48:22.147651 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 34.74s 2025-09-03 00:48:22.147662 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 18.23s 2025-09-03 00:48:22.147673 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 13.86s 2025-09-03 00:48:22.147683 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 13.52s 2025-09-03 00:48:22.147694 | orchestrator | ovn-db : Restart ovn-northd container ----------------------------------- 8.90s 2025-09-03 00:48:22.147705 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.85s 2025-09-03 00:48:22.147725 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.58s 2025-09-03 00:48:22.147742 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.80s 2025-09-03 00:48:22.147753 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.64s 2025-09-03 00:48:22.147764 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.07s 2025-09-03 00:48:22.147775 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.91s 2025-09-03 00:48:22.147786 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.66s 2025-09-03 00:48:22.147796 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.66s 2025-09-03 00:48:22.147807 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.64s 2025-09-03 00:48:22.147818 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.54s 2025-09-03 00:48:22.147829 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.50s 2025-09-03 00:48:22.147840 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.40s 2025-09-03 00:48:22.147851 | orchestrator | ovn-db : Get OVN_Northbound cluster leader ------------------------------ 1.33s 2025-09-03 00:48:22.147861 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.29s 2025-09-03 00:48:22.147872 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.27s 2025-09-03 00:48:22.147883 | orchestrator | 2025-09-03 00:48:22 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:48:25.180936 | orchestrator | 2025-09-03 00:48:25 | INFO  | Task f3c465b7-dc1c-42cc-acd1-2e6db06e7dc6 is in state STARTED 2025-09-03 00:48:25.183331 | orchestrator | 2025-09-03 00:48:25 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:48:25.183361 | orchestrator | 2025-09-03 00:48:25 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:48:28.260618 | orchestrator | 2025-09-03 00:48:28 | INFO  | Task f3c465b7-dc1c-42cc-acd1-2e6db06e7dc6 is in state STARTED 2025-09-03 00:48:28.261301 | orchestrator | 2025-09-03 00:48:28 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state 
STARTED 2025-09-03 00:48:28.261344 | orchestrator | 2025-09-03 00:48:28 | INFO  | Wait 1 second(s) until the next check [the same two status checks repeated about every 3 seconds from 00:48:31 to 00:50:48; Task f3c465b7-dc1c-42cc-acd1-2e6db06e7dc6 and Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 remained in state STARTED throughout] 2025-09-03 00:50:48.551213 | orchestrator | 2025-09-03 00:50:48 | INFO  | Wait 1 second(s)
until the next check 2025-09-03 00:50:51.596323 | orchestrator | 2025-09-03 00:50:51 | INFO  | Task f3c465b7-dc1c-42cc-acd1-2e6db06e7dc6 is in state STARTED 2025-09-03 00:50:51.596634 | orchestrator | 2025-09-03 00:50:51 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:50:51.596661 | orchestrator | 2025-09-03 00:50:51 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:50:54.635375 | orchestrator | 2025-09-03 00:50:54 | INFO  | Task f3c465b7-dc1c-42cc-acd1-2e6db06e7dc6 is in state STARTED 2025-09-03 00:50:54.842949 | orchestrator | 2025-09-03 00:50:54 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:50:54.843061 | orchestrator | 2025-09-03 00:50:54 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:50:57.691569 | orchestrator | 2025-09-03 00:50:57 | INFO  | Task f3c465b7-dc1c-42cc-acd1-2e6db06e7dc6 is in state SUCCESS 2025-09-03 00:50:57.691755 | orchestrator | 2025-09-03 00:50:57.693276 | orchestrator | 2025-09-03 00:50:57.693311 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-03 00:50:57.693325 | orchestrator | 2025-09-03 00:50:57.693337 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-03 00:50:57.693349 | orchestrator | Wednesday 03 September 2025 00:44:51 +0000 (0:00:00.214) 0:00:00.214 *** 2025-09-03 00:50:57.693361 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:50:57.693375 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:50:57.693387 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:50:57.693399 | orchestrator | 2025-09-03 00:50:57.693411 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-03 00:50:57.693422 | orchestrator | Wednesday 03 September 2025 00:44:51 +0000 (0:00:00.404) 0:00:00.619 *** 2025-09-03 00:50:57.693434 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2025-09-03 00:50:57.693445 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2025-09-03 00:50:57.693473 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2025-09-03 00:50:57.693484 | orchestrator | 2025-09-03 00:50:57.693495 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2025-09-03 00:50:57.693506 | orchestrator | 2025-09-03 00:50:57.693517 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-09-03 00:50:57.693528 | orchestrator | Wednesday 03 September 2025 00:44:52 +0000 (0:00:00.665) 0:00:01.285 *** 2025-09-03 00:50:57.693539 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 00:50:57.693575 | orchestrator | 2025-09-03 00:50:57.693587 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2025-09-03 00:50:57.693597 | orchestrator | Wednesday 03 September 2025 00:44:52 +0000 (0:00:00.551) 0:00:01.837 *** 2025-09-03 00:50:57.693609 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:50:57.693620 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:50:57.693631 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:50:57.693642 | orchestrator | 2025-09-03 00:50:57.693653 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-09-03 00:50:57.693664 | orchestrator | Wednesday 03 September 2025 00:44:53 +0000 
(0:00:00.676) 0:00:02.513 *** 2025-09-03 00:50:57.693675 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 00:50:57.693686 | orchestrator | 2025-09-03 00:50:57.693770 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2025-09-03 00:50:57.693815 | orchestrator | Wednesday 03 September 2025 00:44:54 +0000 (0:00:00.866) 0:00:03.380 *** 2025-09-03 00:50:57.693827 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:50:57.693838 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:50:57.693849 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:50:57.693910 | orchestrator | 2025-09-03 00:50:57.693924 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2025-09-03 00:50:57.693938 | orchestrator | Wednesday 03 September 2025 00:44:54 +0000 (0:00:00.722) 0:00:04.103 *** 2025-09-03 00:50:57.693951 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-09-03 00:50:57.693964 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-09-03 00:50:57.693997 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-09-03 00:50:57.694010 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-09-03 00:50:57.694075 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-09-03 00:50:57.694089 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-09-03 00:50:57.694102 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-09-03 00:50:57.694116 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-09-03 00:50:57.694128 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-09-03 00:50:57.694142 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-09-03 00:50:57.694154 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-09-03 00:50:57.694167 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-09-03 00:50:57.694180 | orchestrator | 2025-09-03 00:50:57.694193 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-09-03 00:50:57.694205 | orchestrator | Wednesday 03 September 2025 00:44:57 +0000 (0:00:02.692) 0:00:06.796 *** 2025-09-03 00:50:57.694218 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-09-03 00:50:57.694231 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-09-03 00:50:57.694243 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-09-03 00:50:57.694254 | orchestrator | 2025-09-03 00:50:57.694265 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-09-03 00:50:57.694276 | orchestrator | Wednesday 03 September 2025 00:44:58 +0000 (0:00:00.840) 0:00:07.636 *** 2025-09-03 00:50:57.694287 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-09-03 00:50:57.694298 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-09-03 00:50:57.694309 | orchestrator | changed: [testbed-node-0] => 
(item=ip_vs) 2025-09-03 00:50:57.694320 | orchestrator | 2025-09-03 00:50:57.694331 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-09-03 00:50:57.694352 | orchestrator | Wednesday 03 September 2025 00:44:59 +0000 (0:00:01.294) 0:00:08.930 *** 2025-09-03 00:50:57.694363 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2025-09-03 00:50:57.694374 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.694398 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2025-09-03 00:50:57.694410 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.694421 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2025-09-03 00:50:57.694432 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.694443 | orchestrator | 2025-09-03 00:50:57.694454 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2025-09-03 00:50:57.694465 | orchestrator | Wednesday 03 September 2025 00:45:00 +0000 (0:00:00.601) 0:00:09.532 *** 2025-09-03 00:50:57.694486 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-03 00:50:57.694502 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-03 00:50:57.694514 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-03 00:50:57.694526 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-03 00:50:57.694584 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-03 00:50:57.694614 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-03 00:50:57.694627 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-03 00:50:57.694645 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-03 00:50:57.694657 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-03 00:50:57.694669 | orchestrator | 2025-09-03 00:50:57.694680 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-09-03 00:50:57.694691 | orchestrator | Wednesday 03 September 2025 00:45:02 +0000 (0:00:01.811) 0:00:11.343 *** 2025-09-03 00:50:57.694702 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:50:57.694713 | 
orchestrator | changed: [testbed-node-1] 2025-09-03 00:50:57.694724 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:50:57.694735 | orchestrator | 2025-09-03 00:50:57.694746 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2025-09-03 00:50:57.694758 | orchestrator | Wednesday 03 September 2025 00:45:03 +0000 (0:00:01.245) 0:00:12.589 *** 2025-09-03 00:50:57.694768 | orchestrator | changed: [testbed-node-0] => (item=users) 2025-09-03 00:50:57.694780 | orchestrator | changed: [testbed-node-1] => (item=users) 2025-09-03 00:50:57.694791 | orchestrator | changed: [testbed-node-2] => (item=users) 2025-09-03 00:50:57.694848 | orchestrator | changed: [testbed-node-0] => (item=rules) 2025-09-03 00:50:57.694859 | orchestrator | changed: [testbed-node-1] => (item=rules) 2025-09-03 00:50:57.694870 | orchestrator | changed: [testbed-node-2] => (item=rules) 2025-09-03 00:50:57.694881 | orchestrator | 2025-09-03 00:50:57.694892 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-09-03 00:50:57.695050 | orchestrator | Wednesday 03 September 2025 00:45:06 +0000 (0:00:03.206) 0:00:15.795 *** 2025-09-03 00:50:57.695063 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:50:57.695074 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:50:57.695085 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:50:57.695096 | orchestrator | 2025-09-03 00:50:57.695107 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-09-03 00:50:57.695127 | orchestrator | Wednesday 03 September 2025 00:45:07 +0000 (0:00:01.148) 0:00:16.943 *** 2025-09-03 00:50:57.695138 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:50:57.695149 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:50:57.695160 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:50:57.695171 | orchestrator | 2025-09-03 00:50:57.695182 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-09-03 00:50:57.695193 | orchestrator | Wednesday 03 September 2025 00:45:09 +0000 (0:00:01.806) 0:00:18.750 *** 2025-09-03 00:50:57.695205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-03 00:50:57.695239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-03 00:50:57.695256 | orchestrator 
| skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-03 00:50:57.695269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__caf7319e9ef5bfecaccfcc25b2366614166ff7a0', '__omit_place_holder__caf7319e9ef5bfecaccfcc25b2366614166ff7a0'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-03 00:50:57.695281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-03 00:50:57.695292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-03 00:50:57.695311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-03 00:50:57.695323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'__omit_place_holder__caf7319e9ef5bfecaccfcc25b2366614166ff7a0', '__omit_place_holder__caf7319e9ef5bfecaccfcc25b2366614166ff7a0'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-03 00:50:57.695335 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.695346 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.695366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-03 00:50:57.695383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-03 00:50:57.695395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-03 00:50:57.695407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__caf7319e9ef5bfecaccfcc25b2366614166ff7a0', '__omit_place_holder__caf7319e9ef5bfecaccfcc25b2366614166ff7a0'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-03 00:50:57.695424 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.695436 | orchestrator | 2025-09-03 00:50:57.695447 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-09-03 00:50:57.695458 | orchestrator | Wednesday 03 September 2025 00:45:10 +0000 (0:00:01.190) 0:00:19.941 *** 2025-09-03 00:50:57.695502 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-03 00:50:57.695516 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-03 00:50:57.695536 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-03 00:50:57.695554 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-03 00:50:57.695566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-03 00:50:57.695606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__caf7319e9ef5bfecaccfcc25b2366614166ff7a0', 
'__omit_place_holder__caf7319e9ef5bfecaccfcc25b2366614166ff7a0'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-03 00:50:57.695671 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-03 00:50:57.695684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-03 00:50:57.695695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__caf7319e9ef5bfecaccfcc25b2366614166ff7a0', '__omit_place_holder__caf7319e9ef5bfecaccfcc25b2366614166ff7a0'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-03 00:50:57.695713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-03 00:50:57.695730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-03 00:50:57.695742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 
'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__caf7319e9ef5bfecaccfcc25b2366614166ff7a0', '__omit_place_holder__caf7319e9ef5bfecaccfcc25b2366614166ff7a0'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-03 00:50:57.695770 | orchestrator | 2025-09-03 00:50:57.695782 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-09-03 00:50:57.695793 | orchestrator | Wednesday 03 September 2025 00:45:14 +0000 (0:00:03.383) 0:00:23.324 *** 2025-09-03 00:50:57.695805 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-03 00:50:57.695816 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-03 00:50:57.695828 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-03 00:50:57.695847 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-03 00:50:57.695866 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-03 00:50:57.695877 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-03 00:50:57.695896 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-03 00:50:57.695908 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-03 00:50:57.695919 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-03 00:50:57.695930 | orchestrator | 2025-09-03 00:50:57.695941 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-09-03 00:50:57.695953 | orchestrator | Wednesday 03 September 2025 00:45:18 +0000 (0:00:03.844) 0:00:27.169 *** 2025-09-03 00:50:57.695964 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-09-03 00:50:57.695975 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-09-03 00:50:57.696065 | orchestrator | changed: 
[testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-09-03 00:50:57.696077 | orchestrator | 2025-09-03 00:50:57.696088 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-09-03 00:50:57.696099 | orchestrator | Wednesday 03 September 2025 00:45:21 +0000 (0:00:03.485) 0:00:30.655 *** 2025-09-03 00:50:57.696109 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-09-03 00:50:57.696120 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-09-03 00:50:57.696229 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-09-03 00:50:57.696241 | orchestrator | 2025-09-03 00:50:57.696376 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-09-03 00:50:57.696390 | orchestrator | Wednesday 03 September 2025 00:45:25 +0000 (0:00:03.859) 0:00:34.515 *** 2025-09-03 00:50:57.696401 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.696412 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.696423 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.696434 | orchestrator | 2025-09-03 00:50:57.696445 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-09-03 00:50:57.696456 | orchestrator | Wednesday 03 September 2025 00:45:26 +0000 (0:00:00.897) 0:00:35.412 *** 2025-09-03 00:50:57.696467 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-09-03 00:50:57.696493 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-09-03 00:50:57.696505 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-09-03 00:50:57.696516 | orchestrator | 2025-09-03 00:50:57.696526 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-09-03 00:50:57.696537 | orchestrator | Wednesday 03 September 2025 00:45:28 +0000 (0:00:02.550) 0:00:37.962 *** 2025-09-03 00:50:57.696548 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-09-03 00:50:57.696559 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-09-03 00:50:57.696570 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-09-03 00:50:57.696581 | orchestrator | 2025-09-03 00:50:57.696592 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-09-03 00:50:57.696603 | orchestrator | Wednesday 03 September 2025 00:45:31 +0000 (0:00:02.518) 0:00:40.480 *** 2025-09-03 00:50:57.696614 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-09-03 00:50:57.696625 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-09-03 00:50:57.696635 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-09-03 00:50:57.696646 | orchestrator | 2025-09-03 00:50:57.696657 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 
2025-09-03 00:50:57.696668 | orchestrator | Wednesday 03 September 2025 00:45:32 +0000 (0:00:01.350) 0:00:41.831 *** 2025-09-03 00:50:57.696680 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-09-03 00:50:57.696691 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-09-03 00:50:57.696702 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-09-03 00:50:57.696713 | orchestrator | 2025-09-03 00:50:57.696724 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-09-03 00:50:57.696765 | orchestrator | Wednesday 03 September 2025 00:45:34 +0000 (0:00:01.712) 0:00:43.544 *** 2025-09-03 00:50:57.696776 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 00:50:57.696788 | orchestrator | 2025-09-03 00:50:57.696799 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-09-03 00:50:57.696809 | orchestrator | Wednesday 03 September 2025 00:45:35 +0000 (0:00:00.619) 0:00:44.163 *** 2025-09-03 00:50:57.696821 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-03 00:50:57.696832 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-03 00:50:57.696852 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-03 00:50:57.696926 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-03 00:50:57.696941 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-03 00:50:57.696953 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-03 00:50:57.697058 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-03 00:50:57.697072 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-03 00:50:57.697084 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-03 00:50:57.697102 | orchestrator | 2025-09-03 00:50:57.697113 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-09-03 00:50:57.697124 | orchestrator | Wednesday 03 September 2025 00:45:39 +0000 (0:00:04.199) 0:00:48.363 *** 2025-09-03 00:50:57.697146 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-03 00:50:57.697164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-03 00:50:57.697176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-03 00:50:57.697187 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.697199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-03 00:50:57.697211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-03 00:50:57.697222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-03 00:50:57.697240 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.697252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-03 00:50:57.697271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-03 00:50:57.697288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-03 00:50:57.697300 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.697312 | orchestrator | 2025-09-03 00:50:57.697323 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-09-03 00:50:57.697334 | orchestrator | Wednesday 03 September 2025 00:45:40 +0000 (0:00:00.952) 0:00:49.316 *** 2025-09-03 00:50:57.697345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-03 00:50:57.697357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-03 00:50:57.697368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-03 00:50:57.697386 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.697398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-03 00:50:57.697417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-03 00:50:57.697434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-03 00:50:57.697446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': 
'30'}}})  2025-09-03 00:50:57.697458 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.697469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-03 00:50:57.697480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-03 00:50:57.697491 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.697503 | orchestrator | 2025-09-03 00:50:57.697520 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-09-03 00:50:57.697531 | orchestrator | Wednesday 03 September 2025 00:45:41 +0000 (0:00:01.258) 0:00:50.574 *** 2025-09-03 00:50:57.697543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-03 00:50:57.697562 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-03 00:50:57.697574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-03 
00:50:57.697585 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.697602 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-03 00:50:57.697614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-03 00:50:57.697625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-03 00:50:57.697636 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.697791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-03 00:50:57.697805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-03 00:50:57.697824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-03 00:50:57.697836 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.697847 | orchestrator | 2025-09-03 00:50:57.697858 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-09-03 00:50:57.697870 | orchestrator | Wednesday 03 September 2025 00:45:43 +0000 (0:00:01.938) 0:00:52.513 *** 2025-09-03 00:50:57.697881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-03 00:50:57.697893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-03 00:50:57.697905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-03 00:50:57.697916 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.697967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-03 00:50:57.698006 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-03 00:50:57.698068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-03 00:50:57.698083 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.698105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-03 00:50:57.698123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-03 00:50:57.698136 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-03 00:50:57.698147 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.698158 | orchestrator | 2025-09-03 00:50:57.698169 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-09-03 00:50:57.698181 | orchestrator | Wednesday 03 September 2025 00:45:44 +0000 (0:00:01.547) 0:00:54.060 *** 2025-09-03 00:50:57.698200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-03 00:50:57.698212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-03 00:50:57.698223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-03 00:50:57.698235 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.701674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-03 00:50:57.701797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-03 00:50:57.701816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-03 00:50:57.701831 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.701846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-03 00:50:57.701880 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-03 00:50:57.701893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-03 00:50:57.701904 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.701916 | orchestrator | 2025-09-03 00:50:57.701929 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2025-09-03 00:50:57.701942 | orchestrator | Wednesday 03 September 2025 00:45:46 +0000 (0:00:01.386) 0:00:55.446 *** 2025-09-03 00:50:57.701954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-03 00:50:57.702111 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-03 00:50:57.702138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-03 00:50:57.702150 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.702161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-03 00:50:57.702184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-03 00:50:57.702195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-03 00:50:57.702206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-03 00:50:57.702218 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-03 00:50:57.702237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-03 00:50:57.702250 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.702270 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.702282 | orchestrator | 2025-09-03 00:50:57.702298 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2025-09-03 00:50:57.702312 | orchestrator | Wednesday 03 September 2025 00:45:47 +0000 (0:00:00.782) 0:00:56.228 *** 2025-09-03 00:50:57.702325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-03 00:50:57.702343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-03 00:50:57.702355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-03 00:50:57.702367 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.702379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-03 00:50:57.702391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-03 00:50:57.702413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-03 00:50:57.702424 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.702441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-03 00:50:57.702458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-03 00:50:57.702470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-03 00:50:57.702481 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.702493 | orchestrator | 2025-09-03 00:50:57.702505 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2025-09-03 00:50:57.702516 | orchestrator | Wednesday 03 September 2025 00:45:47 +0000 (0:00:00.690) 0:00:56.919 *** 2025-09-03 00:50:57.702527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-03 00:50:57.702540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-03 00:50:57.702552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-03 00:50:57.702563 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.702581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-03 00:50:57.702602 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-03 00:50:57.702613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-03 00:50:57.702623 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.702633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-03 00:50:57.702643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-03 00:50:57.702653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-03 00:50:57.702663 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.702673 | orchestrator | 2025-09-03 00:50:57.702682 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-09-03 00:50:57.702692 | orchestrator | Wednesday 03 September 2025 00:45:48 +0000 (0:00:00.870) 0:00:57.790 *** 2025-09-03 00:50:57.702702 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-09-03 00:50:57.702713 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-09-03 00:50:57.702729 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-09-03 00:50:57.702747 | 
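The service definitions looped over above each carry a 'healthcheck' dict (interval, retries, start_period, test, timeout). As a reading aid only, the following minimal Python sketch shows how such a dict could be mapped onto Docker CLI health flags; it assumes the plain numbers are seconds and is not the mechanism kolla-ansible itself uses.

def healthcheck_to_docker_args(hc: dict) -> list[str]:
    """Map a kolla-style healthcheck dict to `docker run` health flags (illustrative)."""
    test = hc["test"]  # e.g. ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313']
    cmd = test[1] if test[0] == "CMD-SHELL" else " ".join(test[1:])
    return [
        "--health-cmd", cmd,
        "--health-interval", f"{hc['interval']}s",      # assumption: values are seconds
        "--health-retries", str(hc["retries"]),
        "--health-start-period", f"{hc['start_period']}s",
        "--health-timeout", f"{hc['timeout']}s",
    ]

# Example taken from the haproxy item logged for testbed-node-0 above.
example = {
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:61313"],
    "timeout": "30",
}
print(healthcheck_to_docker_args(example))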
orchestrator | 2025-09-03 00:50:57.702757 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-09-03 00:50:57.702766 | orchestrator | Wednesday 03 September 2025 00:45:50 +0000 (0:00:01.555) 0:00:59.345 *** 2025-09-03 00:50:57.702776 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-09-03 00:50:57.702786 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-09-03 00:50:57.702796 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-09-03 00:50:57.702806 | orchestrator | 2025-09-03 00:50:57.702816 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-09-03 00:50:57.702829 | orchestrator | Wednesday 03 September 2025 00:45:51 +0000 (0:00:01.671) 0:01:01.016 *** 2025-09-03 00:50:57.702839 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-09-03 00:50:57.702849 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-09-03 00:50:57.702859 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-03 00:50:57.702870 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.702880 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-09-03 00:50:57.702889 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-03 00:50:57.702899 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.702909 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-03 00:50:57.702919 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.702928 | orchestrator | 2025-09-03 00:50:57.702946 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-09-03 00:50:57.702956 | orchestrator | Wednesday 03 September 2025 00:45:53 +0000 (0:00:01.234) 0:01:02.251 *** 2025-09-03 00:50:57.702966 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-03 00:50:57.703021 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-03 00:50:57.703034 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-03 00:50:57.703060 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-03 00:50:57.703077 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-03 00:50:57.703087 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-03 00:50:57.703098 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-03 00:50:57.703108 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-03 00:50:57.703118 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-03 00:50:57.703128 | orchestrator | 2025-09-03 00:50:57.703138 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-09-03 00:50:57.703148 | orchestrator | Wednesday 03 September 2025 00:45:56 +0000 (0:00:03.395) 0:01:05.646 *** 2025-09-03 00:50:57.703158 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 00:50:57.703175 | orchestrator | 2025-09-03 00:50:57.703185 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-09-03 00:50:57.703195 | orchestrator | Wednesday 03 September 2025 00:45:57 +0000 (0:00:00.750) 0:01:06.397 *** 2025-09-03 00:50:57.703206 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-09-03 00:50:57.703224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-03 00:50:57.703240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.703251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.703262 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-09-03 00:50:57.703272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-03 00:50:57.703290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.703328 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 
'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-09-03 00:50:57.703344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.703354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-03 00:50:57.703365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.703375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.703396 | orchestrator | 2025-09-03 00:50:57.703406 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-09-03 00:50:57.703415 | orchestrator | Wednesday 03 September 2025 00:46:02 +0000 (0:00:04.806) 0:01:11.203 *** 2025-09-03 00:50:57.703426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-09-03 00:50:57.703444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-03 00:50:57.703459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.703470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.703479 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.703487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-09-03 00:50:57.703504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-03 00:50:57.703513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.703521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.703530 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.703548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-09-03 00:50:57.703557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-03 00:50:57.703565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.703573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 
'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.703587 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.703595 | orchestrator | 2025-09-03 00:50:57.703603 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-09-03 00:50:57.703612 | orchestrator | Wednesday 03 September 2025 00:46:03 +0000 (0:00:01.092) 0:01:12.296 *** 2025-09-03 00:50:57.703621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-09-03 00:50:57.703633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-09-03 00:50:57.703641 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.703650 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-09-03 00:50:57.703658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-09-03 00:50:57.703666 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.703674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-09-03 00:50:57.703683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-09-03 00:50:57.703691 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.703699 | orchestrator | 2025-09-03 00:50:57.703713 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-09-03 00:50:57.703722 | orchestrator | Wednesday 03 September 2025 00:46:04 +0000 (0:00:01.226) 0:01:13.522 *** 2025-09-03 00:50:57.703730 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:50:57.703738 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:50:57.703747 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:50:57.703755 | orchestrator | 2025-09-03 00:50:57.703763 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-09-03 00:50:57.703771 | orchestrator | Wednesday 03 September 2025 00:46:06 +0000 (0:00:01.709) 0:01:15.232 *** 2025-09-03 00:50:57.703779 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:50:57.703787 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:50:57.703795 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:50:57.703803 | orchestrator | 2025-09-03 00:50:57.703811 | orchestrator | TASK [include_role : 
barbican] ************************************************* 2025-09-03 00:50:57.703823 | orchestrator | Wednesday 03 September 2025 00:46:08 +0000 (0:00:02.416) 0:01:17.649 *** 2025-09-03 00:50:57.703831 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 00:50:57.703839 | orchestrator | 2025-09-03 00:50:57.703847 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-09-03 00:50:57.703855 | orchestrator | Wednesday 03 September 2025 00:46:09 +0000 (0:00:00.974) 0:01:18.623 *** 2025-09-03 00:50:57.703864 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-03 00:50:57.703879 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-03 00:50:57.703888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.703897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.703911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.703931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.703953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-03 00:50:57.703961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.703970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.703992 | orchestrator | 2025-09-03 00:50:57.704001 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-09-03 00:50:57.704010 | orchestrator | Wednesday 03 September 2025 00:46:13 +0000 (0:00:04.076) 0:01:22.699 *** 2025-09-03 00:50:57.704024 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-03 00:50:57.704037 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-03 00:50:57.704052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.704061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.704069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-03 00:50:57.704078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.704091 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.704100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.704119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.704127 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.704136 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.704144 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.704152 | orchestrator | 2025-09-03 00:50:57.704160 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-09-03 00:50:57.704169 | orchestrator | Wednesday 03 September 2025 00:46:14 +0000 (0:00:01.094) 0:01:23.794 *** 2025-09-03 00:50:57.704177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-03 00:50:57.704187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-03 00:50:57.704195 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.704204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-03 00:50:57.704212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-03 00:50:57.704220 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.704228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-03 00:50:57.704236 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-03 00:50:57.704244 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.704253 | orchestrator | 2025-09-03 00:50:57.704260 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-09-03 00:50:57.704269 | orchestrator | Wednesday 03 September 2025 00:46:15 +0000 (0:00:00.978) 0:01:24.773 *** 2025-09-03 00:50:57.704277 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:50:57.704285 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:50:57.704293 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:50:57.704301 | orchestrator | 2025-09-03 00:50:57.704309 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-09-03 00:50:57.704317 | orchestrator | Wednesday 03 September 2025 00:46:16 +0000 (0:00:01.165) 0:01:25.938 *** 2025-09-03 00:50:57.704331 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:50:57.704339 | orchestrator | changed: 
[testbed-node-1] 2025-09-03 00:50:57.704347 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:50:57.704355 | orchestrator | 2025-09-03 00:50:57.704369 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-09-03 00:50:57.704377 | orchestrator | Wednesday 03 September 2025 00:46:18 +0000 (0:00:01.760) 0:01:27.699 *** 2025-09-03 00:50:57.704385 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.704394 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.704402 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.704410 | orchestrator | 2025-09-03 00:50:57.704418 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-09-03 00:50:57.704426 | orchestrator | Wednesday 03 September 2025 00:46:18 +0000 (0:00:00.252) 0:01:27.952 *** 2025-09-03 00:50:57.704434 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 00:50:57.704442 | orchestrator | 2025-09-03 00:50:57.704450 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-09-03 00:50:57.704462 | orchestrator | Wednesday 03 September 2025 00:46:19 +0000 (0:00:00.763) 0:01:28.716 *** 2025-09-03 00:50:57.704471 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-09-03 00:50:57.704480 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-09-03 00:50:57.704489 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-09-03 00:50:57.704497 | orchestrator | 2025-09-03 00:50:57.704505 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-09-03 00:50:57.704513 | orchestrator | Wednesday 03 September 2025 00:46:22 +0000 (0:00:02.605) 0:01:31.321 *** 2025-09-03 00:50:57.704536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-09-03 00:50:57.704545 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.704558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-09-03 00:50:57.704567 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.704575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-09-03 00:50:57.704584 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.704591 | orchestrator | 2025-09-03 00:50:57.704599 | orchestrator | TASK 
[haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-09-03 00:50:57.704608 | orchestrator | Wednesday 03 September 2025 00:46:24 +0000 (0:00:01.860) 0:01:33.181 *** 2025-09-03 00:50:57.704616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-03 00:50:57.704628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-03 00:50:57.704636 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.704644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-03 00:50:57.704658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-03 00:50:57.704667 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.704679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-03 00:50:57.704688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-03 00:50:57.704697 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.704705 | orchestrator | 2025-09-03 00:50:57.704713 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-09-03 00:50:57.704721 | orchestrator | Wednesday 03 September 2025 00:46:25 +0000 (0:00:01.776) 0:01:34.958 *** 2025-09-03 00:50:57.704730 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.704737 | orchestrator | skipping: 
[testbed-node-1] 2025-09-03 00:50:57.704746 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.704754 | orchestrator | 2025-09-03 00:50:57.704762 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-09-03 00:50:57.704770 | orchestrator | Wednesday 03 September 2025 00:46:26 +0000 (0:00:00.657) 0:01:35.615 *** 2025-09-03 00:50:57.704778 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.704786 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.704794 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.704802 | orchestrator | 2025-09-03 00:50:57.704810 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-09-03 00:50:57.704818 | orchestrator | Wednesday 03 September 2025 00:46:27 +0000 (0:00:01.152) 0:01:36.767 *** 2025-09-03 00:50:57.704826 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 00:50:57.704833 | orchestrator | 2025-09-03 00:50:57.704841 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-09-03 00:50:57.704849 | orchestrator | Wednesday 03 September 2025 00:46:28 +0000 (0:00:00.712) 0:01:37.480 *** 2025-09-03 00:50:57.704857 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-03 00:50:57.704872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.704901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.704917 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.704930 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-03 00:50:57.704939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.704948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.704961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.704975 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-03 00:50:57.705008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.705017 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.705025 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.705040 | orchestrator | 2025-09-03 00:50:57.705048 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-09-03 00:50:57.705056 | orchestrator | Wednesday 03 September 2025 00:46:31 +0000 (0:00:03.404) 0:01:40.885 *** 2025-09-03 00:50:57.705065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-03 00:50:57.705073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.705091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.705100 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': 
'30'}}})  2025-09-03 00:50:57.705109 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.705117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-03 00:50:57.705130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.705139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-03 00:50:57.705156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.705165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': 
{'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.705174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.705187 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.705196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.705204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.705212 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.705220 | orchestrator | 2025-09-03 00:50:57.705229 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2025-09-03 00:50:57.705237 | orchestrator | Wednesday 03 September 2025 00:46:32 +0000 (0:00:00.900) 0:01:41.786 *** 2025-09-03 00:50:57.705245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-03 00:50:57.705258 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-03 00:50:57.705267 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.705275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-03 00:50:57.705283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-03 00:50:57.705291 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.705303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-03 00:50:57.705311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-03 00:50:57.705325 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.705333 | orchestrator | 2025-09-03 00:50:57.705341 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2025-09-03 00:50:57.705349 | orchestrator | Wednesday 03 September 2025 00:46:33 +0000 (0:00:00.854) 0:01:42.640 *** 2025-09-03 00:50:57.705357 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:50:57.705365 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:50:57.705373 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:50:57.705381 | orchestrator | 2025-09-03 00:50:57.705389 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2025-09-03 00:50:57.705397 | orchestrator | Wednesday 03 September 2025 00:46:34 +0000 (0:00:01.308) 0:01:43.948 *** 2025-09-03 00:50:57.705405 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:50:57.705413 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:50:57.705421 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:50:57.705429 | orchestrator | 2025-09-03 00:50:57.705437 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2025-09-03 00:50:57.705445 | orchestrator | Wednesday 03 September 2025 00:46:36 +0000 (0:00:01.987) 0:01:45.936 *** 2025-09-03 00:50:57.705453 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.705461 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.705469 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.705477 | orchestrator | 2025-09-03 00:50:57.705485 | orchestrator | TASK [include_role : cyborg] *************************************************** 2025-09-03 00:50:57.705493 | orchestrator | Wednesday 03 September 2025 00:46:37 +0000 (0:00:00.711) 0:01:46.647 *** 2025-09-03 00:50:57.705501 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.705509 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.705517 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.705525 | orchestrator | 2025-09-03 00:50:57.705533 | orchestrator | 
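The per-service haproxy dictionaries iterated above (for example the cinder_api entry with port 8776, listen_port 8776 and external_fqdn api.testbed.osism.xyz) are what the haproxy-config role templates into HAProxy frontend/backend pairs on the controllers. The following is a minimal Python sketch of that mapping, not the role's actual Jinja2 template; the internal VIP 192.168.16.9 and the backend member addresses are assumptions read off values that appear elsewhere in this log.

# Minimal sketch (not kolla-ansible's haproxy-config template): render an
# approximate HAProxy frontend/backend pair from one of the per-service
# dicts shown in the log above.
def render_haproxy_service(name, svc, vip, members):
    """Return an approximate haproxy stanza for a kolla-style service dict."""
    port = svc["port"]
    listen_port = svc.get("listen_port", port)
    lines = [
        f"frontend {name}_front",
        f"    mode {svc['mode']}",
        f"    bind {vip}:{listen_port}",
        f"    default_backend {name}_back",
        "",
        f"backend {name}_back",
        f"    mode {svc['mode']}",
    ]
    # A custom_member_list (as used by radosgw and glance in this log) wins
    # over generated members; otherwise emit one "server" line per backend.
    for member in svc.get("custom_member_list") or [
        f"server {host} {addr}:{port} check inter 2000 rise 2 fall 5"
        for host, addr in members
    ]:
        if member:
            lines.append(f"    {member}")
    return "\n".join(lines)

# Values copied from the cinder_api entry logged above; 192.168.16.9 is
# assumed to be the internal VIP, based on the no_proxy lists seen below.
cinder_api = {"enabled": "yes", "mode": "http", "external": False,
              "port": "8776", "listen_port": "8776", "tls_backend": "no"}
nodes = [("testbed-node-0", "192.168.16.10"),
         ("testbed-node-1", "192.168.16.11"),
         ("testbed-node-2", "192.168.16.12")]
print(render_haproxy_service("cinder_api", cinder_api, "192.168.16.9", nodes))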
TASK [include_role : designate] ************************************************ 2025-09-03 00:50:57.705541 | orchestrator | Wednesday 03 September 2025 00:46:37 +0000 (0:00:00.391) 0:01:47.039 *** 2025-09-03 00:50:57.705549 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 00:50:57.705557 | orchestrator | 2025-09-03 00:50:57.705565 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-09-03 00:50:57.705573 | orchestrator | Wednesday 03 September 2025 00:46:38 +0000 (0:00:00.903) 0:01:47.942 *** 2025-09-03 00:50:57.705582 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-03 00:50:57.705596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-03 00:50:57.705605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.705632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.705641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': 
{'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.705649 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-03 00:50:57.705657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.705666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.705680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-03 00:50:57.705697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.705706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.705714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.705722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.705731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.705744 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-03 00:50:57.705761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-03 00:50:57.705770 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.705778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.705787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.705795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.705803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': 
{'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.705817 | orchestrator | 2025-09-03 00:50:57.705825 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-09-03 00:50:57.705833 | orchestrator | Wednesday 03 September 2025 00:46:43 +0000 (0:00:04.474) 0:01:52.417 *** 2025-09-03 00:50:57.705852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-03 00:50:57.705861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-03 00:50:57.705869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.705878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.705886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-03 00:50:57.705900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.705913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-03 00:50:57.705926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.705935 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  
2025-09-03 00:50:57.705943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.705951 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.705960 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.705968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.706045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.706057 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.706066 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.706079 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-03 00:50:57.706087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-03 00:50:57.706096 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.706104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.706117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.706131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.706143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.706152 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.706160 | orchestrator | 2025-09-03 00:50:57.706168 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-09-03 00:50:57.706177 | orchestrator | Wednesday 03 September 2025 00:46:44 +0000 (0:00:00.799) 0:01:53.217 *** 2025-09-03 00:50:57.706185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-09-03 00:50:57.706194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-09-03 00:50:57.706202 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.706211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-09-03 00:50:57.706219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-09-03 00:50:57.706227 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.706235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-09-03 00:50:57.706243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-09-03 00:50:57.706257 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.706265 | orchestrator | 2025-09-03 00:50:57.706273 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-09-03 00:50:57.706281 | orchestrator | Wednesday 03 September 2025 00:46:45 +0000 (0:00:00.954) 0:01:54.171 *** 2025-09-03 00:50:57.706289 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:50:57.706297 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:50:57.706306 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:50:57.706314 | orchestrator | 2025-09-03 00:50:57.706322 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-09-03 
00:50:57.706330 | orchestrator | Wednesday 03 September 2025 00:46:46 +0000 (0:00:01.785) 0:01:55.956 *** 2025-09-03 00:50:57.706338 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:50:57.706346 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:50:57.706354 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:50:57.706362 | orchestrator | 2025-09-03 00:50:57.706370 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-09-03 00:50:57.706378 | orchestrator | Wednesday 03 September 2025 00:46:48 +0000 (0:00:01.782) 0:01:57.739 *** 2025-09-03 00:50:57.706385 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.706392 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.706399 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.706406 | orchestrator | 2025-09-03 00:50:57.706412 | orchestrator | TASK [include_role : glance] *************************************************** 2025-09-03 00:50:57.706419 | orchestrator | Wednesday 03 September 2025 00:46:49 +0000 (0:00:00.566) 0:01:58.306 *** 2025-09-03 00:50:57.706426 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 00:50:57.706433 | orchestrator | 2025-09-03 00:50:57.706439 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-09-03 00:50:57.706446 | orchestrator | Wednesday 03 September 2025 00:46:50 +0000 (0:00:00.812) 0:01:59.118 *** 2025-09-03 00:50:57.706468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-03 00:50:57.706478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': 
['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-03 00:50:57.706501 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-03 00:50:57.706509 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 
'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-03 00:50:57.706527 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-03 00:50:57.706539 | orchestrator | skipping: [testbed-node-2] 
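
The glance_api haproxy entries above supply a custom_member_list whose strings ('server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', and likewise for nodes 1 and 2) become the backend server lines that HAProxy balances across. A minimal sketch of how such member strings can be assembled from the node names, addresses and port seen in the log; this is an illustration only, not the kolla-ansible template itself:

    # Node names and internal addresses as they appear in the glance items above.
    GLANCE_BACKENDS = {
        "testbed-node-0": "192.168.16.10",
        "testbed-node-1": "192.168.16.11",
        "testbed-node-2": "192.168.16.12",
    }

    def member_lines(backends, port=9292, check="check inter 2000 rise 2 fall 5"):
        # Build HAProxy 'server' lines matching the logged custom_member_list entries.
        return [f"server {name} {ip}:{port} {check}" for name, ip in backends.items()]

    for line in member_lines(GLANCE_BACKENDS):
        print(line)
    # server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5
    # ...
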
=> (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-03 00:50:57.706551 | orchestrator | 2025-09-03 00:50:57.706558 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-09-03 00:50:57.706565 | orchestrator | Wednesday 03 September 2025 00:46:54 +0000 (0:00:04.017) 0:02:03.136 *** 2025-09-03 00:50:57.706577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-03 00:50:57.706595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-03 00:50:57.706612 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.706620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-03 00:50:57.706637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-03 00:50:57.706650 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.706658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-03 00:50:57.706674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-03 00:50:57.706683 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.706690 | orchestrator | 2025-09-03 00:50:57.706697 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-09-03 00:50:57.706703 | orchestrator | Wednesday 03 September 2025 00:46:57 +0000 (0:00:03.068) 0:02:06.205 *** 2025-09-03 00:50:57.706715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-03 
00:50:57.706722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-03 00:50:57.706729 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.706736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-03 00:50:57.706744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-03 00:50:57.706751 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.706758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-03 00:50:57.706769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-03 00:50:57.706777 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.706784 | orchestrator | 2025-09-03 00:50:57.706791 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-09-03 00:50:57.706798 | orchestrator | Wednesday 03 September 2025 00:47:00 +0000 (0:00:03.134) 0:02:09.339 *** 2025-09-03 00:50:57.706804 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:50:57.706811 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:50:57.706818 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:50:57.706825 | orchestrator | 2025-09-03 00:50:57.706835 
| orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-09-03 00:50:57.706847 | orchestrator | Wednesday 03 September 2025 00:47:01 +0000 (0:00:01.236) 0:02:10.575 *** 2025-09-03 00:50:57.706854 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:50:57.706861 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:50:57.706868 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:50:57.706875 | orchestrator | 2025-09-03 00:50:57.706882 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-09-03 00:50:57.706888 | orchestrator | Wednesday 03 September 2025 00:47:03 +0000 (0:00:01.955) 0:02:12.531 *** 2025-09-03 00:50:57.706895 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.706902 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.706908 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.706915 | orchestrator | 2025-09-03 00:50:57.706922 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-09-03 00:50:57.706928 | orchestrator | Wednesday 03 September 2025 00:47:03 +0000 (0:00:00.482) 0:02:13.014 *** 2025-09-03 00:50:57.706935 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 00:50:57.706942 | orchestrator | 2025-09-03 00:50:57.706948 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-09-03 00:50:57.706955 | orchestrator | Wednesday 03 September 2025 00:47:04 +0000 (0:00:00.808) 0:02:13.823 *** 2025-09-03 00:50:57.706962 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-03 00:50:57.706969 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-03 00:50:57.706992 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 
'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-03 00:50:57.707000 | orchestrator | 2025-09-03 00:50:57.707007 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-09-03 00:50:57.707013 | orchestrator | Wednesday 03 September 2025 00:47:07 +0000 (0:00:02.981) 0:02:16.804 *** 2025-09-03 00:50:57.707029 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-03 00:50:57.707047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-03 00:50:57.707055 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.707062 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.707069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-03 00:50:57.707076 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.707083 | orchestrator | 2025-09-03 00:50:57.707089 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-09-03 00:50:57.707096 | orchestrator | Wednesday 03 September 2025 00:47:08 +0000 (0:00:00.619) 0:02:17.424 *** 2025-09-03 00:50:57.707103 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-09-03 00:50:57.707110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-09-03 
00:50:57.707117 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.707124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-09-03 00:50:57.707130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-09-03 00:50:57.707137 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.707144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-09-03 00:50:57.707151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-09-03 00:50:57.707158 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.707164 | orchestrator | 2025-09-03 00:50:57.707171 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-09-03 00:50:57.707178 | orchestrator | Wednesday 03 September 2025 00:47:08 +0000 (0:00:00.617) 0:02:18.041 *** 2025-09-03 00:50:57.707184 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:50:57.707195 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:50:57.707202 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:50:57.707209 | orchestrator | 2025-09-03 00:50:57.707216 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-09-03 00:50:57.707222 | orchestrator | Wednesday 03 September 2025 00:47:10 +0000 (0:00:01.305) 0:02:19.347 *** 2025-09-03 00:50:57.707229 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:50:57.707236 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:50:57.707243 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:50:57.707249 | orchestrator | 2025-09-03 00:50:57.707256 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-09-03 00:50:57.707263 | orchestrator | Wednesday 03 September 2025 00:47:12 +0000 (0:00:02.095) 0:02:21.443 *** 2025-09-03 00:50:57.707280 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.707287 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.707298 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.707305 | orchestrator | 2025-09-03 00:50:57.707313 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-09-03 00:50:57.707319 | orchestrator | Wednesday 03 September 2025 00:47:12 +0000 (0:00:00.510) 0:02:21.954 *** 2025-09-03 00:50:57.707326 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 00:50:57.707333 | orchestrator | 2025-09-03 00:50:57.707340 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-09-03 00:50:57.707346 | orchestrator | Wednesday 03 September 2025 00:47:13 +0000 (0:00:00.901) 0:02:22.855 *** 2025-09-03 00:50:57.707358 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 
'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-03 00:50:57.707373 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-03 00:50:57.707399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-03 00:50:57.707412 | orchestrator | 2025-09-03 00:50:57.707419 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-09-03 00:50:57.707426 | orchestrator | Wednesday 03 September 2025 00:47:17 +0000 (0:00:03.671) 0:02:26.527 *** 2025-09-03 00:50:57.707443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 
'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-03 00:50:57.707451 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.707459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-03 00:50:57.707471 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.707488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-03 00:50:57.707503 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.707510 | orchestrator | 2025-09-03 00:50:57.707516 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-09-03 00:50:57.707523 | orchestrator | Wednesday 03 September 2025 00:47:18 +0000 (0:00:01.054) 0:02:27.581 *** 2025-09-03 00:50:57.707530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-03 00:50:57.707539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-03 00:50:57.707545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-03 00:50:57.707558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-03 00:50:57.707565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-03 00:50:57.707572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-03 00:50:57.707580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-03 00:50:57.707591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-03 00:50:57.707598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-09-03 00:50:57.707605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-09-03 00:50:57.707616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-03 00:50:57.707623 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.707630 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.707637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 
'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-03 00:50:57.707644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-03 00:50:57.707651 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-03 00:50:57.707658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-09-03 00:50:57.707670 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.707677 | orchestrator | 2025-09-03 00:50:57.707683 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-09-03 00:50:57.707690 | orchestrator | Wednesday 03 September 2025 00:47:19 +0000 (0:00:00.865) 0:02:28.447 *** 2025-09-03 00:50:57.707697 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:50:57.707704 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:50:57.707711 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:50:57.707724 | orchestrator | 2025-09-03 00:50:57.707731 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-09-03 00:50:57.707737 | orchestrator | Wednesday 03 September 2025 00:47:20 +0000 (0:00:01.211) 0:02:29.658 *** 2025-09-03 00:50:57.707744 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:50:57.707751 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:50:57.707758 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:50:57.707764 | orchestrator | 2025-09-03 00:50:57.707771 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-09-03 00:50:57.707778 | orchestrator | Wednesday 03 September 2025 00:47:22 +0000 (0:00:01.929) 0:02:31.587 *** 2025-09-03 00:50:57.707785 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.707791 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.707798 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.707805 | orchestrator | 2025-09-03 00:50:57.707811 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-09-03 00:50:57.707818 | orchestrator | Wednesday 03 September 2025 00:47:22 +0000 (0:00:00.264) 0:02:31.852 *** 2025-09-03 00:50:57.707825 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.707832 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.707838 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.707845 | orchestrator | 2025-09-03 00:50:57.707851 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-09-03 00:50:57.707858 | orchestrator | Wednesday 03 September 2025 00:47:23 +0000 (0:00:00.452) 0:02:32.304 *** 2025-09-03 00:50:57.707865 | orchestrator | included: keystone 
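
A few records back, the horizon haproxy configuration attaches the frontend rule 'use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }' to both the HTTP and redirect frontends, so ACME HTTP-01 challenge requests are diverted to the acme_client backend while normal dashboard traffic stays on horizon. The same path test, expressed in Python for illustration (HAProxy evaluates path_reg natively; the non-ACME backend name below is a placeholder, not taken from the log):

    import re

    # The path_reg expression used verbatim in the horizon frontend rules above.
    ACME_CHALLENGE = re.compile(r"^/.well-known/acme-challenge/.+")

    def backend_for(path):
        # 'horizon_default' is a placeholder name for the non-ACME backend.
        return "acme_client_back" if ACME_CHALLENGE.search(path) else "horizon_default"

    print(backend_for("/.well-known/acme-challenge/some-token"))  # acme_client_back
    print(backend_for("/auth/login/"))                            # horizon_default
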
for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 00:50:57.707871 | orchestrator | 2025-09-03 00:50:57.707878 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-09-03 00:50:57.707885 | orchestrator | Wednesday 03 September 2025 00:47:24 +0000 (0:00:00.951) 0:02:33.256 *** 2025-09-03 00:50:57.707906 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-03 00:50:57.707918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-03 00:50:57.707931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-03 00:50:57.707939 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-03 00:50:57.707946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-03 00:50:57.707953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-03 00:50:57.707969 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-03 00:50:57.708000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-03 00:50:57.708008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-03 00:50:57.708015 | orchestrator | 2025-09-03 00:50:57.708022 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-09-03 00:50:57.708029 | orchestrator | Wednesday 03 September 2025 00:47:28 +0000 (0:00:04.472) 0:02:37.729 *** 2025-09-03 00:50:57.708036 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-03 00:50:57.708044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-03 00:50:57.708057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-03 00:50:57.708064 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.708076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-03 00:50:57.708090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-03 00:50:57.708097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-03 00:50:57.708104 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.708111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-03 00:50:57.708123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-03 00:50:57.708135 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-03 00:50:57.708147 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.708154 | orchestrator | 2025-09-03 00:50:57.708161 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-09-03 00:50:57.708168 | orchestrator | Wednesday 03 September 2025 00:47:29 +0000 (0:00:00.688) 0:02:38.417 *** 2025-09-03 00:50:57.708175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-03 00:50:57.708182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-03 00:50:57.708189 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.708196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-03 00:50:57.708203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-03 00:50:57.708210 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.708217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-03 00:50:57.708224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-03 00:50:57.708231 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.708238 | orchestrator | 2025-09-03 00:50:57.708244 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-09-03 00:50:57.708251 | orchestrator | Wednesday 03 September 2025 00:47:29 +0000 (0:00:00.614) 0:02:39.032 *** 2025-09-03 00:50:57.708258 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:50:57.708264 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:50:57.708271 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:50:57.708278 | orchestrator | 2025-09-03 00:50:57.708285 | 
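
Note on the keystone loop items above: each item mirrors the service definition that the kolla-ansible keystone role hands to the haproxy-config role, and only entries carrying an 'haproxy' key produce frontend/backend configuration, which appears to be why the keystone-ssh and keystone-fernet items are skipped. A minimal YAML sketch of the load-balancer-relevant part of the keystone entry, reconstructed from the values visible in this output (the top-level variable name keystone_services and the comments are assumptions, not taken from the log):

    keystone_services:            # assumed variable name; structure copied from the log items
      keystone:
        container_name: keystone
        group: keystone
        enabled: true
        image: registry.osism.tech/kolla/keystone:2024.2
        haproxy:
          keystone_internal:      # internal frontend/backend on port 5000
            enabled: true
            mode: http
            external: false
            tls_backend: "no"
            port: "5000"
            listen_port: "5000"
            backend_http_extra:
              - balance roundrobin
          keystone_external:      # external frontend published via api.testbed.osism.xyz
            enabled: true
            mode: http
            external: true
            external_fqdn: api.testbed.osism.xyz
            tls_backend: "no"
            port: "5000"
            listen_port: "5000"
            backend_http_extra:
              - balance roundrobin
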
orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-09-03 00:50:57.708291 | orchestrator | Wednesday 03 September 2025 00:47:31 +0000 (0:00:01.195) 0:02:40.228 *** 2025-09-03 00:50:57.708298 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:50:57.708305 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:50:57.708311 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:50:57.708318 | orchestrator | 2025-09-03 00:50:57.708325 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-09-03 00:50:57.708332 | orchestrator | Wednesday 03 September 2025 00:47:32 +0000 (0:00:01.830) 0:02:42.058 *** 2025-09-03 00:50:57.708338 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.708345 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.708356 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.708363 | orchestrator | 2025-09-03 00:50:57.708370 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-09-03 00:50:57.708377 | orchestrator | Wednesday 03 September 2025 00:47:33 +0000 (0:00:00.383) 0:02:42.442 *** 2025-09-03 00:50:57.708383 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 00:50:57.708390 | orchestrator | 2025-09-03 00:50:57.708396 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-09-03 00:50:57.708403 | orchestrator | Wednesday 03 September 2025 00:47:34 +0000 (0:00:00.724) 0:02:43.167 *** 2025-09-03 00:50:57.708418 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-03 00:50:57.708427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.708434 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 
'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-03 00:50:57.708442 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.708454 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-03 00:50:57.708469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.708476 | orchestrator | 2025-09-03 00:50:57.708483 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-09-03 00:50:57.708490 | orchestrator | Wednesday 03 September 2025 00:47:37 +0000 (0:00:03.101) 0:02:46.268 *** 2025-09-03 00:50:57.708497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-03 00:50:57.708504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.708511 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.708518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-03 00:50:57.708538 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.708545 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.708556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-03 00:50:57.708564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.708571 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.708577 | orchestrator | 2025-09-03 00:50:57.708584 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-09-03 00:50:57.708591 | orchestrator | Wednesday 03 September 2025 00:47:38 +0000 (0:00:00.848) 0:02:47.117 *** 2025-09-03 00:50:57.708599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-09-03 00:50:57.708606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-09-03 00:50:57.708613 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.708619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-09-03 00:50:57.708631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-09-03 00:50:57.708638 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.708645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-09-03 00:50:57.708651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-09-03 00:50:57.708658 | orchestrator | 
skipping: [testbed-node-2] 2025-09-03 00:50:57.708665 | orchestrator | 2025-09-03 00:50:57.708672 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-09-03 00:50:57.708679 | orchestrator | Wednesday 03 September 2025 00:47:38 +0000 (0:00:00.833) 0:02:47.950 *** 2025-09-03 00:50:57.708685 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:50:57.708692 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:50:57.708699 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:50:57.708705 | orchestrator | 2025-09-03 00:50:57.708712 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-09-03 00:50:57.708719 | orchestrator | Wednesday 03 September 2025 00:47:40 +0000 (0:00:01.248) 0:02:49.199 *** 2025-09-03 00:50:57.708725 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:50:57.708732 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:50:57.708739 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:50:57.708746 | orchestrator | 2025-09-03 00:50:57.708752 | orchestrator | TASK [include_role : manila] *************************************************** 2025-09-03 00:50:57.708759 | orchestrator | Wednesday 03 September 2025 00:47:42 +0000 (0:00:02.010) 0:02:51.210 *** 2025-09-03 00:50:57.708770 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 00:50:57.708777 | orchestrator | 2025-09-03 00:50:57.708784 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-09-03 00:50:57.708791 | orchestrator | Wednesday 03 September 2025 00:47:43 +0000 (0:00:01.210) 0:02:52.420 *** 2025-09-03 00:50:57.708801 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-09-03 00:50:57.708809 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-09-03 00:50:57.708820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.708828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.708835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.708847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.708858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.708865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.708872 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-09-03 00:50:57.708886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.708893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.708905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.708912 | orchestrator | 2025-09-03 00:50:57.708919 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-09-03 00:50:57.708926 | orchestrator | Wednesday 03 September 2025 00:47:47 +0000 (0:00:04.086) 0:02:56.507 *** 2025-09-03 00:50:57.708936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-09-03 00:50:57.708943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.708954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.708961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.708968 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.708975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8786', 'listen_port': '8786'}}}})  2025-09-03 00:50:57.709022 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.709034 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.709041 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.709055 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.709062 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-09-03 00:50:57.709069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  
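
Note on the manila block: as with keystone, only manila-api defines an 'haproxy' mapping, so the manila-scheduler, manila-share and manila-data items are skipped by the haproxy-config tasks. A minimal YAML sketch of the manila-api entry as it appears in the items above (the variable name manila_services and the comments are assumptions; volumes and healthcheck are omitted for brevity):

    manila_services:              # assumed variable name; values copied from the log items
      manila-api:
        container_name: manila_api
        group: manila-api
        enabled: true
        image: registry.osism.tech/kolla/manila-api:2024.2
        haproxy:
          manila_api:             # internal frontend/backend on port 8786
            enabled: "yes"
            mode: http
            external: false
            port: "8786"
            listen_port: "8786"
          manila_api_external:    # external frontend published via api.testbed.osism.xyz
            enabled: "yes"
            mode: http
            external: true
            external_fqdn: api.testbed.osism.xyz
            port: "8786"
            listen_port: "8786"
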
2025-09-03 00:50:57.709076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.709088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.709095 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.709102 | orchestrator | 2025-09-03 00:50:57.709109 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-09-03 00:50:57.709116 | orchestrator | Wednesday 03 September 2025 00:47:48 +0000 (0:00:00.990) 0:02:57.497 *** 2025-09-03 00:50:57.709123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-09-03 00:50:57.709133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-09-03 00:50:57.709140 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.709147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-09-03 00:50:57.709160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-09-03 00:50:57.709167 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.709174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-09-03 00:50:57.709181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-09-03 00:50:57.709188 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.709194 | orchestrator | 2025-09-03 00:50:57.709201 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-09-03 00:50:57.709208 | orchestrator | Wednesday 03 September 2025 00:47:49 +0000 (0:00:01.195) 
0:02:58.693 *** 2025-09-03 00:50:57.709215 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:50:57.709222 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:50:57.709228 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:50:57.709235 | orchestrator | 2025-09-03 00:50:57.709242 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-09-03 00:50:57.709249 | orchestrator | Wednesday 03 September 2025 00:47:50 +0000 (0:00:01.265) 0:02:59.958 *** 2025-09-03 00:50:57.709255 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:50:57.709262 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:50:57.709269 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:50:57.709275 | orchestrator | 2025-09-03 00:50:57.709282 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-09-03 00:50:57.709288 | orchestrator | Wednesday 03 September 2025 00:47:52 +0000 (0:00:02.015) 0:03:01.974 *** 2025-09-03 00:50:57.709295 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 00:50:57.709302 | orchestrator | 2025-09-03 00:50:57.709308 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-09-03 00:50:57.709315 | orchestrator | Wednesday 03 September 2025 00:47:54 +0000 (0:00:01.334) 0:03:03.308 *** 2025-09-03 00:50:57.709322 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-03 00:50:57.709328 | orchestrator | 2025-09-03 00:50:57.709335 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-09-03 00:50:57.709342 | orchestrator | Wednesday 03 September 2025 00:47:56 +0000 (0:00:02.701) 0:03:06.010 *** 2025-09-03 00:50:57.709354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-03 00:50:57.709372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-03 00:50:57.709380 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.709387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-03 00:50:57.709395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-03 00:50:57.709402 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.709420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-03 00:50:57.709433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-03 00:50:57.709441 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.709447 | orchestrator | 2025-09-03 00:50:57.709454 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-09-03 00:50:57.709461 | orchestrator | Wednesday 03 September 2025 00:47:59 +0000 (0:00:02.215) 0:03:08.225 *** 2025-09-03 00:50:57.709469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-03 00:50:57.709486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-03 00:50:57.709494 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.709505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-03 
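The skipped loop items above carry the full HAProxy material for the mariadb service: 'custom_member_list' holds pre-rendered "server" lines in which testbed-node-0 is the active member and testbed-node-1/testbed-node-2 are marked "backup", so client traffic is pinned to one Galera node at a time. A minimal sketch of how such a block could be assembled follows; the data dict is copied from the loop item, while the rendering function and the VIP address are illustrative assumptions rather than kolla-ansible's actual template.

# Minimal sketch: assemble an HAProxy "listen" section for mariadb from the
# structure shown in the loop items above. The dict mirrors the log output;
# the rendering logic and the VIP are assumptions for illustration only.
mariadb_lb = {
    "mode": "tcp",
    "port": "3306",
    "frontend_tcp_extra": ["option clitcpka", "timeout client 3600s"],
    "backend_tcp_extra": ["option srvtcpka", "timeout server 3600s"],
    "custom_member_list": [
        " server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5",
        " server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup",
        " server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup",
    ],
}

def render_listen(name, vip, svc):
    """Render one 'listen' block; only the first member takes traffic, the
    two 'backup' members are used when the check on testbed-node-0 fails."""
    lines = [f"listen {name}", f"    mode {svc['mode']}", f"    bind {vip}:{svc['port']}"]
    lines += [f"    {opt}" for opt in svc["frontend_tcp_extra"] + svc["backend_tcp_extra"]]
    lines += [f"    {member.strip()}" for member in svc["custom_member_list"] if member.strip()]
    return "\n".join(lines)

print(render_listen("mariadb", "192.168.16.254", mariadb_lb))  # VIP is hypothetical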
00:50:57.709512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-03 00:50:57.709520 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.709536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-03 00:50:57.709549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-03 00:50:57.709555 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.709562 | orchestrator | 2025-09-03 00:50:57.709568 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-09-03 00:50:57.709575 | orchestrator | Wednesday 03 September 2025 00:48:01 +0000 (0:00:02.204) 0:03:10.430 *** 2025-09-03 
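Both the healthcheck in the mariadb container definition ('/usr/bin/clustercheck') and the AVAILABLE_WHEN_DONOR=1 environment variable shown above point at the usual Galera availability rule: a node is served only while it is Synced, or while it is a Donor if donors are explicitly allowed. A small sketch of that decision follows, under the assumption that it mirrors clustercheck's behaviour; the real script also queries wsrep_local_state over MySQL and answers over HTTP, which is omitted here.

# Sketch of the availability rule behind the '/usr/bin/clustercheck' healthcheck
# above. Assumption: the real script reads wsrep_local_state from MySQL and
# replies over HTTP; only the decision itself is shown here.
import os

SYNCED, DONOR_DESYNCED = 4, 2

def node_is_available(wsrep_local_state: int) -> bool:
    # AVAILABLE_WHEN_DONOR=1 (as set in the container environment in the log)
    # keeps a donor/desynced node in rotation during state transfers.
    allow_donor = os.environ.get("AVAILABLE_WHEN_DONOR", "0") == "1"
    return wsrep_local_state == SYNCED or (allow_donor and wsrep_local_state == DONOR_DESYNCED)

os.environ["AVAILABLE_WHEN_DONOR"] = "1"
print(node_is_available(SYNCED), node_is_available(DONOR_DESYNCED))  # True True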
00:50:57.709581 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-03 00:50:57.709588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-03 00:50:57.709595 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.709601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-03 00:50:57.709614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-03 00:50:57.709621 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.709779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-03 00:50:57.709796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option 
srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-03 00:50:57.709809 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.709815 | orchestrator | 2025-09-03 00:50:57.709822 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-09-03 00:50:57.709828 | orchestrator | Wednesday 03 September 2025 00:48:04 +0000 (0:00:02.831) 0:03:13.261 *** 2025-09-03 00:50:57.709835 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:50:57.709841 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:50:57.709847 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:50:57.709853 | orchestrator | 2025-09-03 00:50:57.709859 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-09-03 00:50:57.709866 | orchestrator | Wednesday 03 September 2025 00:48:05 +0000 (0:00:01.796) 0:03:15.058 *** 2025-09-03 00:50:57.709872 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.709878 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.709884 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.709891 | orchestrator | 2025-09-03 00:50:57.709897 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-09-03 00:50:57.709903 | orchestrator | Wednesday 03 September 2025 00:48:07 +0000 (0:00:01.405) 0:03:16.463 *** 2025-09-03 00:50:57.709909 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.709916 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.709922 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.709928 | orchestrator | 2025-09-03 00:50:57.709934 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-09-03 00:50:57.709941 | orchestrator | Wednesday 03 September 2025 00:48:07 +0000 (0:00:00.316) 0:03:16.779 *** 2025-09-03 00:50:57.709947 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 00:50:57.709953 | orchestrator | 2025-09-03 00:50:57.709959 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-09-03 00:50:57.709965 | orchestrator | Wednesday 03 September 2025 00:48:08 +0000 (0:00:01.257) 0:03:18.037 *** 2025-09-03 00:50:57.709973 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-09-03 00:50:57.710000 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 
'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-09-03 00:50:57.710012 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-09-03 00:50:57.710113 | orchestrator | 2025-09-03 00:50:57.710125 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-09-03 00:50:57.710136 | orchestrator | Wednesday 03 September 2025 00:48:10 +0000 (0:00:01.440) 0:03:19.477 *** 2025-09-03 00:50:57.710142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-09-03 00:50:57.710149 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.710156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-09-03 00:50:57.710168 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.710175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 
'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-09-03 00:50:57.710181 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.710187 | orchestrator | 2025-09-03 00:50:57.710194 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-09-03 00:50:57.710200 | orchestrator | Wednesday 03 September 2025 00:48:10 +0000 (0:00:00.410) 0:03:19.888 *** 2025-09-03 00:50:57.710207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-09-03 00:50:57.710213 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.710220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-09-03 00:50:57.710226 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.710239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-09-03 00:50:57.710245 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.710252 | orchestrator | 2025-09-03 00:50:57.710258 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-09-03 00:50:57.710264 | orchestrator | Wednesday 03 September 2025 00:48:11 +0000 (0:00:00.838) 0:03:20.726 *** 2025-09-03 00:50:57.710271 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.710277 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.710284 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.710290 | orchestrator | 2025-09-03 00:50:57.710296 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-09-03 00:50:57.710303 | orchestrator | Wednesday 03 September 2025 00:48:12 +0000 (0:00:00.434) 0:03:21.160 *** 2025-09-03 00:50:57.710309 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.710315 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.710322 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.710328 | orchestrator | 2025-09-03 00:50:57.710338 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-09-03 00:50:57.710345 | orchestrator | Wednesday 03 September 2025 00:48:13 +0000 (0:00:01.263) 0:03:22.424 *** 2025-09-03 00:50:57.710351 | orchestrator | skipping: [testbed-node-0] 2025-09-03 
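The memcached definitions above use 'healthcheck_listen memcached 11211' and keep the HAProxy entry disabled, so the service is only health-checked locally on each node. A rough stand-in for that test is sketched below; the kolla helper verifies that the named process is listening on the port, whereas this sketch only attempts a TCP connect, which is a weaker approximation.

# Rough stand-in for the 'healthcheck_listen memcached 11211' test above.
# Assumption/simplification: the kolla helper checks that the memcached
# process owns the listening socket; this sketch only tries to connect.
import socket
import sys

def port_is_listening(host: str, port: int, timeout: float = 3.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    sys.exit(0 if port_is_listening("127.0.0.1", 11211) else 1)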
00:50:57.710357 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.710364 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.710370 | orchestrator | 2025-09-03 00:50:57.710377 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-09-03 00:50:57.710383 | orchestrator | Wednesday 03 September 2025 00:48:13 +0000 (0:00:00.335) 0:03:22.759 *** 2025-09-03 00:50:57.710389 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 00:50:57.710395 | orchestrator | 2025-09-03 00:50:57.710406 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-09-03 00:50:57.710413 | orchestrator | Wednesday 03 September 2025 00:48:15 +0000 (0:00:01.388) 0:03:24.148 *** 2025-09-03 00:50:57.710420 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-03 00:50:57.710427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.710435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.710446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 
'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.710457 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-03 00:50:57.710469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.710477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-03 00:50:57.710484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-03 00:50:57.710491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.710501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-03 00:50:57.710508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.710515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-03 00:50:57.710527 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-03 00:50:57.710534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-03 00:50:57.710541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.710547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.710574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-03 00:50:57.710586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.710593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 
'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-03 00:50:57.710600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.710606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-03 00:50:57.710617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.710627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-03 00:50:57.710638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-03 00:50:57.710645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.710651 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-03 00:50:57.710658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-03 00:50:57.710668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.710681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.710691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.710698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-03 00:50:57.710705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-03 00:50:57.710711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.710718 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 
'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.710732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-03 00:50:57.710745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.710753 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-03 00:50:57.710760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-03 00:50:57.710768 | 
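Of the neutron items above only neutron_server and neutron_ovn_metadata_agent are enabled; neutron_server is fronted on port 9696 both internally and externally (external_fqdn api.testbed.osism.xyz) and its container healthcheck is 'healthcheck_curl http://<node>:9696'. A loose equivalent of that probe is sketched below, assuming it only needs the API socket to answer; the kolla helper wraps curl, whose default exit status also accepts non-2xx responses.

# Loose equivalent of the 'healthcheck_curl http://192.168.16.10:9696' probe
# used for neutron_server above. Assumption: like plain curl, any HTTP answer
# (including 4xx/5xx) counts as the API being up; only connection failures fail.
from urllib import request, error

def api_responds(url: str, timeout: float = 5.0) -> bool:
    try:
        with request.urlopen(url, timeout=timeout):
            return True
    except error.HTTPError:
        return True   # the server answered, just not with a 2xx status
    except (error.URLError, OSError):
        return False

print(api_responds("http://192.168.16.10:9696"))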
orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-03 00:50:57.710776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-03 00:50:57.710787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.710803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-03 00:50:57.710811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.710819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-03 00:50:57.710827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-03 00:50:57.710835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.710846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-03 00:50:57.710865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-03 00:50:57.710873 | orchestrator | 2025-09-03 00:50:57.710880 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-09-03 00:50:57.710888 | orchestrator | Wednesday 03 September 2025 00:48:18 +0000 (0:00:03.743) 0:03:27.891 *** 2025-09-03 00:50:57.710895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 
'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-03 00:50:57.710903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.710911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.710927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.710938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-03 00:50:57.710946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.710954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-03 00:50:57.710962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-03 00:50:57.710969 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-03 00:50:57.711029 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.711041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.711050 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.711057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-03 00:50:57.711065 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.711074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-03 00:50:57.711090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.711100 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.711107 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-03 00:50:57.711114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-03 
00:50:57.711121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.711132 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-03 00:50:57.711142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.711154 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.711161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.711168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 
'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-03 00:50:57.711175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-03 00:50:57.711189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.711197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-03 00:50:57.711207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-03 00:50:57.711214 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-03 00:50:57.711221 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.711227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-03 00:50:57.711234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-03 00:50:57.711241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.711252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.711265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-03 00:50:57.711273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.711279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-03 00:50:57.711286 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-03 00:50:57.711298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.711305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-03 00:50:57.711315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 
'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.711325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-03 00:50:57.711332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-03 00:50:57.711339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-03 00:50:57.711345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-03 00:50:57.711358 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 
'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.711364 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.711471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-03 00:50:57.711490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-03 00:50:57.711497 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.711504 | orchestrator | 2025-09-03 00:50:57.711510 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-09-03 00:50:57.711517 | orchestrator | Wednesday 03 September 2025 00:48:20 +0000 (0:00:01.358) 0:03:29.250 *** 2025-09-03 00:50:57.711523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-09-03 00:50:57.711530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-09-03 00:50:57.711536 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.711543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-09-03 00:50:57.711548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-09-03 00:50:57.711559 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.711565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-09-03 00:50:57.711570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-09-03 00:50:57.711576 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.711581 | orchestrator | 2025-09-03 00:50:57.711587 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-09-03 00:50:57.711592 | orchestrator | Wednesday 03 September 2025 00:48:21 +0000 (0:00:01.649) 0:03:30.899 *** 2025-09-03 00:50:57.711598 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:50:57.711603 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:50:57.711609 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:50:57.711614 | orchestrator | 2025-09-03 00:50:57.711620 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-09-03 00:50:57.711625 | orchestrator | Wednesday 03 September 2025 00:48:22 +0000 (0:00:01.201) 0:03:32.100 *** 2025-09-03 00:50:57.711631 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:50:57.711636 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:50:57.711642 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:50:57.711647 | orchestrator | 2025-09-03 00:50:57.711653 | orchestrator | TASK [include_role : placement] ************************************************ 2025-09-03 00:50:57.711658 | orchestrator | Wednesday 03 September 2025 00:48:25 +0000 (0:00:02.097) 0:03:34.197 *** 2025-09-03 00:50:57.711663 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 00:50:57.711669 | orchestrator | 2025-09-03 00:50:57.711675 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-09-03 00:50:57.711680 | orchestrator | Wednesday 03 September 2025 00:48:26 +0000 (0:00:01.325) 0:03:35.523 *** 2025-09-03 00:50:57.711703 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-03 00:50:57.711714 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': 
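(Editorial note, for readability of the dumps above.) Every loop item in the preceding haproxy-config tasks is a kolla-style service definition: a mapping with enabled, group, volumes, an optional healthcheck, and an optional haproxy sub-mapping describing the internal and external frontends. A minimal sketch, using a hypothetical helper name haproxy_candidates that is not part of kolla-ansible, of why most items above are reported as "skipping" while the neutron-server entries are processed: only services that are enabled and define a haproxy mapping yield load-balancer configuration.

# Illustrative sketch only: mimics the filtering visible in the log above,
# not the actual kolla-ansible role logic.

def haproxy_candidates(services: dict) -> dict:
    """Return only services that are enabled and expose a 'haproxy' mapping."""
    selected = {}
    for name, svc in services.items():
        # kolla service dicts mix booleans and the strings 'yes'/'no'
        enabled = svc.get("enabled") in (True, "yes")
        if enabled and svc.get("haproxy"):
            selected[name] = svc["haproxy"]
    return selected


services = {
    "neutron-server": {
        "enabled": True,
        "haproxy": {
            "neutron_server": {"enabled": True, "mode": "http", "port": "9696"},
            "neutron_server_external": {"enabled": True, "mode": "http", "port": "9696"},
        },
    },
    "neutron-dhcp-agent": {"enabled": False},               # skipped in the log above
    "neutron-tls-proxy": {"enabled": "no", "haproxy": {}},  # skipped in the log above
}

print(sorted(haproxy_candidates(services)))  # ['neutron-server']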
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-03 00:50:57.711725 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-03 00:50:57.711731 | orchestrator | 2025-09-03 00:50:57.711736 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-09-03 00:50:57.711742 | orchestrator | Wednesday 03 September 2025 00:48:30 +0000 (0:00:03.874) 0:03:39.397 *** 2025-09-03 00:50:57.711748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-03 00:50:57.711754 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.711776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-03 00:50:57.711782 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.711792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-03 00:50:57.711803 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.711809 | orchestrator | 2025-09-03 00:50:57.711814 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-09-03 00:50:57.711820 | orchestrator | Wednesday 03 September 2025 00:48:30 +0000 (0:00:00.517) 0:03:39.915 *** 2025-09-03 00:50:57.711825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-03 00:50:57.711831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-03 00:50:57.711837 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.711843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-03 00:50:57.711848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-03 00:50:57.711854 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.711860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-03 00:50:57.711865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-03 00:50:57.711871 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.711876 | orchestrator | 2025-09-03 00:50:57.711882 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] 
********** 2025-09-03 00:50:57.711887 | orchestrator | Wednesday 03 September 2025 00:48:31 +0000 (0:00:00.730) 0:03:40.645 *** 2025-09-03 00:50:57.711893 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:50:57.711898 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:50:57.711904 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:50:57.711909 | orchestrator | 2025-09-03 00:50:57.711915 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-09-03 00:50:57.711920 | orchestrator | Wednesday 03 September 2025 00:48:33 +0000 (0:00:02.029) 0:03:42.675 *** 2025-09-03 00:50:57.711926 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:50:57.711931 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:50:57.711937 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:50:57.711942 | orchestrator | 2025-09-03 00:50:57.711948 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-09-03 00:50:57.711953 | orchestrator | Wednesday 03 September 2025 00:48:35 +0000 (0:00:01.857) 0:03:44.532 *** 2025-09-03 00:50:57.711959 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 00:50:57.711964 | orchestrator | 2025-09-03 00:50:57.711970 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-09-03 00:50:57.711975 | orchestrator | Wednesday 03 September 2025 00:48:36 +0000 (0:00:01.489) 0:03:46.022 *** 2025-09-03 00:50:57.712014 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-03 00:50:57.712029 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 
'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-03 00:50:57.712036 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.712043 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.712049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.712070 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.712085 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-03 00:50:57.712093 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.712099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.712106 | orchestrator | 2025-09-03 00:50:57.712112 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-09-03 00:50:57.712118 | orchestrator | Wednesday 03 September 2025 00:48:41 +0000 (0:00:04.423) 0:03:50.445 *** 2025-09-03 00:50:57.712141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-03 
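(Editorial note.) The healthcheck blocks in the items above embed the node's API address, which is why the same service shows healthcheck_curl http://192.168.16.10:8774 on testbed-node-0 but .11 and .12 on the other nodes. A minimal sketch, with a made-up helper curl_healthcheck and default values copied from the log, of how such a per-node healthcheck entry could be assembled; this is an illustration of the data shape, not code from the kolla-ansible roles.

# Editorial sketch: builds a healthcheck mapping in the same shape as the
# entries dumped in the log above. Not taken from kolla-ansible.

def curl_healthcheck(address: str, port: int,
                     interval: int = 30, retries: int = 3,
                     start_period: int = 5, timeout: int = 30) -> dict:
    return {
        "interval": str(interval),
        "retries": str(retries),
        "start_period": str(start_period),
        "test": ["CMD-SHELL", f"healthcheck_curl http://{address}:{port}"],
        "timeout": str(timeout),
    }


# Matches the values logged for nova_api on testbed-node-0:
print(curl_healthcheck("192.168.16.10", 8774)["test"])
# ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774']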
00:50:57.712159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.712166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.712172 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.712179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-03 00:50:57.712186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.712193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.712205 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.712231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-03 00:50:57.712239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.712245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.712252 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.712258 | orchestrator | 2025-09-03 00:50:57.712264 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-09-03 00:50:57.712271 | orchestrator | Wednesday 03 September 2025 00:48:42 +0000 (0:00:01.336) 0:03:51.781 *** 2025-09-03 00:50:57.712278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-03 00:50:57.712285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-03 00:50:57.712292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-03 00:50:57.712298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-03 00:50:57.712305 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.712311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-03 00:50:57.712323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-03 00:50:57.712329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-03 00:50:57.712335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-03 00:50:57.712358 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.712365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-03 00:50:57.712372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-03 00:50:57.712382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-03 00:50:57.712388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-03 00:50:57.712395 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.712401 | orchestrator | 2025-09-03 00:50:57.712407 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-09-03 00:50:57.712414 | orchestrator | Wednesday 03 September 2025 00:48:43 +0000 (0:00:00.983) 0:03:52.765 *** 2025-09-03 00:50:57.712420 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:50:57.712427 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:50:57.712433 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:50:57.712439 | orchestrator | 2025-09-03 00:50:57.712445 | orchestrator | TASK [proxysql-config : Copying over 
nova ProxySQL rules config] *************** 2025-09-03 00:50:57.712450 | orchestrator | Wednesday 03 September 2025 00:48:45 +0000 (0:00:01.446) 0:03:54.211 *** 2025-09-03 00:50:57.712456 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:50:57.712461 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:50:57.712467 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:50:57.712472 | orchestrator | 2025-09-03 00:50:57.712478 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-09-03 00:50:57.712483 | orchestrator | Wednesday 03 September 2025 00:48:47 +0000 (0:00:02.171) 0:03:56.383 *** 2025-09-03 00:50:57.712489 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 00:50:57.712494 | orchestrator | 2025-09-03 00:50:57.712500 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-09-03 00:50:57.712505 | orchestrator | Wednesday 03 September 2025 00:48:48 +0000 (0:00:01.640) 0:03:58.023 *** 2025-09-03 00:50:57.712511 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-09-03 00:50:57.712517 | orchestrator | 2025-09-03 00:50:57.712522 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-09-03 00:50:57.712528 | orchestrator | Wednesday 03 September 2025 00:48:49 +0000 (0:00:00.854) 0:03:58.878 *** 2025-09-03 00:50:57.712534 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-09-03 00:50:57.712546 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-09-03 00:50:57.712552 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-09-03 00:50:57.712558 | orchestrator | 2025-09-03 00:50:57.712564 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-09-03 00:50:57.712570 | orchestrator | Wednesday 03 September 2025 00:48:54 +0000 (0:00:04.300) 0:04:03.179 *** 2025-09-03 00:50:57.712592 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-03 00:50:57.712599 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.712608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-03 00:50:57.712614 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.712620 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-03 00:50:57.712626 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.712631 | orchestrator | 2025-09-03 00:50:57.712637 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-09-03 00:50:57.712642 | orchestrator | Wednesday 03 September 2025 00:48:54 +0000 (0:00:00.880) 0:04:04.059 *** 2025-09-03 00:50:57.712648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-03 00:50:57.712658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-03 00:50:57.712663 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.712669 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-03 00:50:57.712675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-03 00:50:57.712680 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.712686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': 
['timeout tunnel 1h']}})  2025-09-03 00:50:57.712692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-03 00:50:57.712697 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.712703 | orchestrator | 2025-09-03 00:50:57.712708 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-09-03 00:50:57.712714 | orchestrator | Wednesday 03 September 2025 00:48:56 +0000 (0:00:01.275) 0:04:05.335 *** 2025-09-03 00:50:57.712719 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:50:57.712725 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:50:57.712730 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:50:57.712736 | orchestrator | 2025-09-03 00:50:57.712741 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-09-03 00:50:57.712747 | orchestrator | Wednesday 03 September 2025 00:48:58 +0000 (0:00:02.248) 0:04:07.584 *** 2025-09-03 00:50:57.712752 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:50:57.712758 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:50:57.712763 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:50:57.712768 | orchestrator | 2025-09-03 00:50:57.712774 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-09-03 00:50:57.712779 | orchestrator | Wednesday 03 September 2025 00:49:01 +0000 (0:00:02.736) 0:04:10.320 *** 2025-09-03 00:50:57.712800 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-09-03 00:50:57.712806 | orchestrator | 2025-09-03 00:50:57.712812 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-09-03 00:50:57.712818 | orchestrator | Wednesday 03 September 2025 00:49:02 +0000 (0:00:01.334) 0:04:11.655 *** 2025-09-03 00:50:57.712826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-03 00:50:57.712832 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.712838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-03 00:50:57.712848 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.712854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 
'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-03 00:50:57.712860 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.712865 | orchestrator | 2025-09-03 00:50:57.712871 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-09-03 00:50:57.712877 | orchestrator | Wednesday 03 September 2025 00:49:03 +0000 (0:00:01.199) 0:04:12.854 *** 2025-09-03 00:50:57.712882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-03 00:50:57.712888 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.712894 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-03 00:50:57.712900 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.712905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-03 00:50:57.712911 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.712916 | orchestrator | 2025-09-03 00:50:57.712922 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-09-03 00:50:57.712927 | orchestrator | Wednesday 03 September 2025 00:49:04 +0000 (0:00:01.257) 0:04:14.112 *** 2025-09-03 00:50:57.712933 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.712939 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.712944 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.712950 | orchestrator | 2025-09-03 00:50:57.712970 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-09-03 00:50:57.712988 | orchestrator | Wednesday 03 September 2025 00:49:06 +0000 (0:00:01.776) 0:04:15.888 *** 2025-09-03 
00:50:57.712994 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:50:57.713000 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:50:57.713006 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:50:57.713017 | orchestrator | 2025-09-03 00:50:57.713023 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-09-03 00:50:57.713028 | orchestrator | Wednesday 03 September 2025 00:49:09 +0000 (0:00:02.317) 0:04:18.206 *** 2025-09-03 00:50:57.713033 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:50:57.713039 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:50:57.713044 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:50:57.713050 | orchestrator | 2025-09-03 00:50:57.713055 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-09-03 00:50:57.713066 | orchestrator | Wednesday 03 September 2025 00:49:12 +0000 (0:00:02.968) 0:04:21.174 *** 2025-09-03 00:50:57.713071 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-09-03 00:50:57.713077 | orchestrator | 2025-09-03 00:50:57.713083 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-09-03 00:50:57.713088 | orchestrator | Wednesday 03 September 2025 00:49:12 +0000 (0:00:00.823) 0:04:21.997 *** 2025-09-03 00:50:57.713094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-03 00:50:57.713100 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.713105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-03 00:50:57.713111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-03 00:50:57.713117 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.713122 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.713128 | orchestrator | 2025-09-03 00:50:57.713133 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 
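[Editor's note — illustrative aside, not part of the job output] The changed/skipped items above show the pattern these haproxy-config and proxysql-config tasks follow: each service dict (and each listener under its 'haproxy' key) is only templated when its 'enabled' flag is truthy, and the log mixes Python booleans (True/False) with 'yes'/'no' strings. The short Python sketch below mimics that filtering under those assumptions; the helper names are invented for illustration and this is not kolla-ansible's actual implementation.

# Illustrative sketch only: filter a service map the way the changed/skipped
# items in the log above suggest, where 'enabled' may be a bool or 'yes'/'no'.

def is_enabled(value) -> bool:
    """Treat True / 'yes' / 'true' / '1' as enabled; everything else as disabled."""
    if isinstance(value, bool):
        return value
    return str(value).strip().lower() in ("yes", "true", "1")

def enabled_haproxy_listeners(project_services: dict) -> dict:
    """Collect haproxy listener entries for enabled services only."""
    listeners = {}
    for service in project_services.values():
        if not is_enabled(service.get("enabled", False)):
            continue  # e.g. nova-spicehtml5proxy / nova-serialproxy above are skipped
        for name, listener in service.get("haproxy", {}).items():
            if is_enabled(listener.get("enabled", False)):
                listeners[name] = listener
    return listeners

# Example shaped like the log items: nova-novncproxy is templated (changed),
# nova-serialproxy is skipped because its 'enabled' flag is False.
services = {
    "nova-novncproxy": {
        "enabled": True,
        "haproxy": {"nova_novncproxy": {"enabled": True, "port": "6080"}},
    },
    "nova-serialproxy": {
        "enabled": False,
        "haproxy": {"nova_serialconsole_proxy": {"enabled": False, "port": "6083"}},
    },
}
print(sorted(enabled_haproxy_listeners(services)))  # ['nova_novncproxy']

[End of editor's note; the job output continues below.]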
2025-09-03 00:50:57.713139 | orchestrator | Wednesday 03 September 2025 00:49:14 +0000 (0:00:01.233) 0:04:23.231 *** 2025-09-03 00:50:57.713144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-03 00:50:57.713150 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.713156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-03 00:50:57.713166 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.713189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-03 00:50:57.713195 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.713201 | orchestrator | 2025-09-03 00:50:57.713206 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-09-03 00:50:57.713215 | orchestrator | Wednesday 03 September 2025 00:49:15 +0000 (0:00:01.276) 0:04:24.508 *** 2025-09-03 00:50:57.713221 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.713226 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.713232 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.713237 | orchestrator | 2025-09-03 00:50:57.713243 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-09-03 00:50:57.713248 | orchestrator | Wednesday 03 September 2025 00:49:16 +0000 (0:00:01.522) 0:04:26.030 *** 2025-09-03 00:50:57.713254 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:50:57.713259 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:50:57.713265 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:50:57.713270 | orchestrator | 2025-09-03 00:50:57.713276 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-09-03 00:50:57.713281 | orchestrator | Wednesday 03 September 2025 00:49:19 +0000 (0:00:02.322) 0:04:28.353 *** 2025-09-03 00:50:57.713286 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:50:57.713292 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:50:57.713297 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:50:57.713303 | orchestrator | 2025-09-03 
00:50:57.713308 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-09-03 00:50:57.713314 | orchestrator | Wednesday 03 September 2025 00:49:22 +0000 (0:00:03.096) 0:04:31.450 *** 2025-09-03 00:50:57.713319 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 00:50:57.713325 | orchestrator | 2025-09-03 00:50:57.713330 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-09-03 00:50:57.713336 | orchestrator | Wednesday 03 September 2025 00:49:23 +0000 (0:00:01.578) 0:04:33.028 *** 2025-09-03 00:50:57.713342 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-03 00:50:57.713348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-03 00:50:57.713358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-03 00:50:57.713380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-03 00:50:57.713390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': 
{'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.713397 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-03 00:50:57.713402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-03 00:50:57.713408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-03 00:50:57.713418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-03 00:50:57.713439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.713449 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-03 00:50:57.713455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-03 00:50:57.713461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-03 00:50:57.713467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-03 00:50:57.713476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.713482 | orchestrator | 2025-09-03 00:50:57.713488 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-09-03 00:50:57.713493 | orchestrator | Wednesday 03 September 2025 00:49:27 +0000 (0:00:03.403) 0:04:36.431 *** 2025-09-03 00:50:57.713514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-03 00:50:57.713521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-03 00:50:57.713527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-03 00:50:57.713533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-03 00:50:57.713593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 
'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-03 00:50:57.713612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.713618 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-03 00:50:57.713643 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.713649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-03 00:50:57.713658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-03 00:50:57.713664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.713670 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.713676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-03 00:50:57.713685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-03 00:50:57.713691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-03 00:50:57.713713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-03 00:50:57.713724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-03 00:50:57.713730 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.713735 | orchestrator | 2025-09-03 00:50:57.713741 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-09-03 00:50:57.713747 | orchestrator | Wednesday 03 September 2025 00:49:28 +0000 (0:00:00.699) 0:04:37.131 *** 2025-09-03 00:50:57.713752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-03 00:50:57.713758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-03 00:50:57.713768 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.713774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-03 00:50:57.713780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-03 00:50:57.713786 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.713791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-03 00:50:57.713797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-03 00:50:57.713802 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.713808 | orchestrator | 2025-09-03 00:50:57.713813 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-09-03 00:50:57.713819 | orchestrator | Wednesday 03 September 2025 00:49:29 +0000 (0:00:01.424) 0:04:38.556 *** 2025-09-03 00:50:57.713825 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:50:57.713830 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:50:57.713836 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:50:57.713841 | orchestrator | 2025-09-03 00:50:57.713847 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-09-03 00:50:57.713852 | orchestrator | Wednesday 03 September 2025 00:49:30 +0000 (0:00:01.384) 0:04:39.940 *** 2025-09-03 00:50:57.713858 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:50:57.713863 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:50:57.713869 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:50:57.713874 | orchestrator | 2025-09-03 00:50:57.713880 | orchestrator | TASK [include_role : opensearch] 
*********************************************** 2025-09-03 00:50:57.713885 | orchestrator | Wednesday 03 September 2025 00:49:32 +0000 (0:00:02.099) 0:04:42.040 *** 2025-09-03 00:50:57.713891 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 00:50:57.713896 | orchestrator | 2025-09-03 00:50:57.713902 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-09-03 00:50:57.713907 | orchestrator | Wednesday 03 September 2025 00:49:34 +0000 (0:00:01.360) 0:04:43.400 *** 2025-09-03 00:50:57.713929 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-03 00:50:57.713940 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-03 00:50:57.713950 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-03 00:50:57.713956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-03 00:50:57.713991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-03 00:50:57.714002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-03 00:50:57.714033 | orchestrator | 2025-09-03 00:50:57.714039 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-09-03 00:50:57.714044 | orchestrator | Wednesday 03 September 2025 00:49:39 +0000 (0:00:04.927) 0:04:48.328 *** 2025-09-03 00:50:57.714050 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-03 00:50:57.714056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-03 00:50:57.714063 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.714070 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-03 00:50:57.714104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-03 00:50:57.714117 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.714123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-03 00:50:57.714129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-03 00:50:57.714135 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.714140 | orchestrator | 2025-09-03 00:50:57.714146 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-09-03 00:50:57.714151 | orchestrator | Wednesday 03 September 2025 00:49:39 +0000 (0:00:00.542) 0:04:48.871 *** 2025-09-03 00:50:57.714157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-09-03 00:50:57.714163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-03 00:50:57.714169 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-03 00:50:57.714174 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.714180 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option 
dontlog-normal']}})  2025-09-03 00:50:57.714201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-03 00:50:57.714212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-03 00:50:57.714218 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.714227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-09-03 00:50:57.714232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-03 00:50:57.714238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-03 00:50:57.714244 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.714249 | orchestrator | 2025-09-03 00:50:57.714255 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-09-03 00:50:57.714260 | orchestrator | Wednesday 03 September 2025 00:49:40 +0000 (0:00:00.797) 0:04:49.668 *** 2025-09-03 00:50:57.714266 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.714272 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.714277 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.714283 | orchestrator | 2025-09-03 00:50:57.714288 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-09-03 00:50:57.714294 | orchestrator | Wednesday 03 September 2025 00:49:41 +0000 (0:00:00.615) 0:04:50.284 *** 2025-09-03 00:50:57.714299 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.714305 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.714310 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.714316 | orchestrator | 2025-09-03 00:50:57.714321 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-09-03 00:50:57.714327 | orchestrator | Wednesday 03 September 2025 00:49:42 +0000 (0:00:01.139) 0:04:51.423 *** 2025-09-03 00:50:57.714332 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 00:50:57.714338 | orchestrator | 2025-09-03 00:50:57.714343 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-09-03 00:50:57.714349 | orchestrator | Wednesday 03 September 2025 00:49:43 +0000 (0:00:01.403) 0:04:52.827 *** 2025-09-03 00:50:57.714355 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-03 00:50:57.714361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-03 00:50:57.714372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-03 00:50:57.714394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-03 00:50:57.714404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-03 00:50:57.714411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-03 00:50:57.714417 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-03 00:50:57.714423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-03 00:50:57.714429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-03 00:50:57.714438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-03 00:50:57.714461 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-03 00:50:57.714471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-03 00:50:57.714477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-03 00:50:57.714483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-03 00:50:57.714489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-03 00:50:57.714495 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-03 00:50:57.714508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-03 00:50:57.714517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-03 00:50:57.714523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-03 00:50:57.714529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-03 00:50:57.714535 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-03 00:50:57.714541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-03 00:50:57.714555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': 
{'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-03 00:50:57.714561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-03 00:50:57.714570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-03 00:50:57.714576 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-03 00:50:57.714582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-03 00:50:57.714592 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 
'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-03 00:50:57.714598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-03 00:50:57.714606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-03 00:50:57.714612 | orchestrator | 2025-09-03 00:50:57.714618 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-09-03 00:50:57.714624 | orchestrator | Wednesday 03 September 2025 00:49:48 +0000 (0:00:04.520) 0:04:57.347 *** 2025-09-03 00:50:57.714632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-03 00:50:57.714638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-03 00:50:57.714644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-03 00:50:57.714650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-03 00:50:57.714660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-03 00:50:57.714669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-03 00:50:57.714678 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-03 00:50:57.714684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-03 00:50:57.714690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-03 00:50:57.714696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-03 00:50:57.714705 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.714711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-03 00:50:57.714717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-03 00:50:57.714725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-03 00:50:57.714735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-03 00:50:57.714741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-03 00:50:57.714747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-03 00:50:57.714756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-03 00:50:57.714762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-03 
00:50:57.714771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-03 00:50:57.714782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-03 00:50:57.714788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-03 00:50:57.714794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-03 00:50:57.714800 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-03 00:50:57.714810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-03 00:50:57.714815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-03 00:50:57.714821 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.714830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-03 00:50:57.714839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-03 00:50:57.714845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-03 00:50:57.714855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-03 00:50:57.714861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-03 00:50:57.714867 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.714872 | orchestrator | 2025-09-03 00:50:57.714878 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-09-03 00:50:57.714884 | orchestrator | Wednesday 03 September 2025 00:49:49 +0000 (0:00:01.132) 0:04:58.480 *** 2025-09-03 00:50:57.714889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-09-03 00:50:57.714895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-09-03 00:50:57.714901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-03 00:50:57.714907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-03 00:50:57.714913 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.714919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-09-03 00:50:57.714927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-09-03 00:50:57.714933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-03 00:50:57.714942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-03 00:50:57.714948 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.714954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-09-03 00:50:57.714959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-09-03 00:50:57.714972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-03 00:50:57.715015 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-03 00:50:57.715022 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.715028 | orchestrator | 2025-09-03 00:50:57.715033 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-09-03 00:50:57.715039 | orchestrator | Wednesday 03 September 2025 00:49:50 +0000 (0:00:00.926) 0:04:59.406 *** 2025-09-03 00:50:57.715045 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.715053 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.715059 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.715064 | orchestrator | 2025-09-03 00:50:57.715070 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-09-03 00:50:57.715075 | orchestrator | Wednesday 03 September 2025 00:49:50 +0000 (0:00:00.453) 0:04:59.860 *** 2025-09-03 00:50:57.715081 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.715086 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.715091 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.715096 | orchestrator | 2025-09-03 00:50:57.715101 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-09-03 00:50:57.715106 | orchestrator | Wednesday 03 September 2025 00:49:52 +0000 (0:00:01.350) 0:05:01.210 *** 2025-09-03 00:50:57.715111 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 00:50:57.715116 | orchestrator | 2025-09-03 00:50:57.715120 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-09-03 00:50:57.715125 | orchestrator | Wednesday 03 September 2025 00:49:53 +0000 (0:00:01.741) 0:05:02.952 *** 2025-09-03 00:50:57.715130 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-03 00:50:57.715143 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-03 00:50:57.715153 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-03 00:50:57.715159 | orchestrator | 2025-09-03 00:50:57.715164 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-09-03 00:50:57.715169 | orchestrator | Wednesday 03 September 2025 00:49:56 +0000 (0:00:02.442) 0:05:05.394 *** 2025-09-03 00:50:57.715174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-09-03 00:50:57.715179 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.715185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 
'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-09-03 00:50:57.715190 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.715201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-09-03 00:50:57.715210 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.715215 | orchestrator | 2025-09-03 00:50:57.715220 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-09-03 00:50:57.715225 | orchestrator | Wednesday 03 September 2025 00:49:56 +0000 (0:00:00.380) 0:05:05.774 *** 2025-09-03 00:50:57.715230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-09-03 00:50:57.715235 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.715240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-09-03 00:50:57.715245 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.715250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-09-03 00:50:57.715255 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.715259 | orchestrator | 2025-09-03 00:50:57.715264 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-09-03 00:50:57.715269 | orchestrator | Wednesday 03 September 2025 00:49:57 +0000 (0:00:00.964) 0:05:06.739 *** 2025-09-03 00:50:57.715274 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.715279 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.715284 | orchestrator | 
skipping: [testbed-node-2] 2025-09-03 00:50:57.715289 | orchestrator | 2025-09-03 00:50:57.715294 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-09-03 00:50:57.715298 | orchestrator | Wednesday 03 September 2025 00:49:58 +0000 (0:00:00.404) 0:05:07.143 *** 2025-09-03 00:50:57.715303 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.715308 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.715313 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.715318 | orchestrator | 2025-09-03 00:50:57.715323 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-09-03 00:50:57.715328 | orchestrator | Wednesday 03 September 2025 00:49:59 +0000 (0:00:01.297) 0:05:08.441 *** 2025-09-03 00:50:57.715332 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 00:50:57.715337 | orchestrator | 2025-09-03 00:50:57.715342 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-09-03 00:50:57.715347 | orchestrator | Wednesday 03 September 2025 00:50:01 +0000 (0:00:01.744) 0:05:10.186 *** 2025-09-03 00:50:57.715352 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-09-03 00:50:57.715364 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-09-03 00:50:57.715372 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-09-03 00:50:57.715378 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-09-03 00:50:57.715384 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-09-03 00:50:57.715389 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-09-03 00:50:57.715399 | orchestrator | 2025-09-03 00:50:57.715406 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single 
external frontend] *** 2025-09-03 00:50:57.715411 | orchestrator | Wednesday 03 September 2025 00:50:07 +0000 (0:00:06.197) 0:05:16.384 *** 2025-09-03 00:50:57.715420 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-09-03 00:50:57.715425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-09-03 00:50:57.715430 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.715435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-09-03 00:50:57.715441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-09-03 00:50:57.715450 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.715462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-09-03 00:50:57.715468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-09-03 00:50:57.715473 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.715478 | orchestrator | 2025-09-03 00:50:57.715483 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-09-03 00:50:57.715488 | orchestrator | Wednesday 03 September 2025 00:50:07 +0000 (0:00:00.656) 0:05:17.041 *** 2025-09-03 00:50:57.715493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-03 00:50:57.715498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-03 00:50:57.715503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 
'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-03 00:50:57.715508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-03 00:50:57.715517 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.715522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-03 00:50:57.715527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-03 00:50:57.715532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-03 00:50:57.715537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-03 00:50:57.715542 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.715547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-03 00:50:57.715554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-03 00:50:57.715559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-03 00:50:57.715564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-03 00:50:57.715569 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.715574 | orchestrator | 2025-09-03 00:50:57.715582 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-09-03 00:50:57.715588 | orchestrator | Wednesday 03 September 2025 00:50:09 +0000 (0:00:01.575) 0:05:18.616 *** 2025-09-03 00:50:57.715592 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:50:57.715597 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:50:57.715602 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:50:57.715607 | orchestrator | 2025-09-03 00:50:57.715612 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2025-09-03 00:50:57.715617 | orchestrator | Wednesday 03 September 2025 00:50:10 +0000 (0:00:01.291) 0:05:19.907 *** 2025-09-03 00:50:57.715622 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:50:57.715627 | orchestrator | changed: [testbed-node-1] 2025-09-03 
00:50:57.715632 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:50:57.715637 | orchestrator | 2025-09-03 00:50:57.715641 | orchestrator | TASK [include_role : swift] **************************************************** 2025-09-03 00:50:57.715646 | orchestrator | Wednesday 03 September 2025 00:50:12 +0000 (0:00:02.087) 0:05:21.995 *** 2025-09-03 00:50:57.715651 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.715656 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.715661 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.715666 | orchestrator | 2025-09-03 00:50:57.715671 | orchestrator | TASK [include_role : tacker] *************************************************** 2025-09-03 00:50:57.715676 | orchestrator | Wednesday 03 September 2025 00:50:13 +0000 (0:00:00.314) 0:05:22.309 *** 2025-09-03 00:50:57.715681 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.715685 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.715690 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.715699 | orchestrator | 2025-09-03 00:50:57.715704 | orchestrator | TASK [include_role : trove] **************************************************** 2025-09-03 00:50:57.715709 | orchestrator | Wednesday 03 September 2025 00:50:13 +0000 (0:00:00.291) 0:05:22.601 *** 2025-09-03 00:50:57.715714 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.715719 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.715723 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.715728 | orchestrator | 2025-09-03 00:50:57.715733 | orchestrator | TASK [include_role : venus] **************************************************** 2025-09-03 00:50:57.715738 | orchestrator | Wednesday 03 September 2025 00:50:14 +0000 (0:00:00.575) 0:05:23.177 *** 2025-09-03 00:50:57.715743 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.715748 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.715752 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.715757 | orchestrator | 2025-09-03 00:50:57.715762 | orchestrator | TASK [include_role : watcher] ************************************************** 2025-09-03 00:50:57.715767 | orchestrator | Wednesday 03 September 2025 00:50:14 +0000 (0:00:00.298) 0:05:23.476 *** 2025-09-03 00:50:57.715772 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.715777 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.715782 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.715786 | orchestrator | 2025-09-03 00:50:57.715791 | orchestrator | TASK [include_role : zun] ****************************************************** 2025-09-03 00:50:57.715796 | orchestrator | Wednesday 03 September 2025 00:50:14 +0000 (0:00:00.276) 0:05:23.752 *** 2025-09-03 00:50:57.715801 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.715806 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.715811 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.715816 | orchestrator | 2025-09-03 00:50:57.715821 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2025-09-03 00:50:57.715826 | orchestrator | Wednesday 03 September 2025 00:50:15 +0000 (0:00:00.801) 0:05:24.553 *** 2025-09-03 00:50:57.715831 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:50:57.715836 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:50:57.715840 | orchestrator | ok: [testbed-node-2] 2025-09-03 
00:50:57.715845 | orchestrator | 2025-09-03 00:50:57.715850 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2025-09-03 00:50:57.715855 | orchestrator | Wednesday 03 September 2025 00:50:16 +0000 (0:00:00.691) 0:05:25.245 *** 2025-09-03 00:50:57.715860 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:50:57.715865 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:50:57.715870 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:50:57.715875 | orchestrator | 2025-09-03 00:50:57.715880 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2025-09-03 00:50:57.715884 | orchestrator | Wednesday 03 September 2025 00:50:16 +0000 (0:00:00.336) 0:05:25.581 *** 2025-09-03 00:50:57.715889 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:50:57.715894 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:50:57.715899 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:50:57.715904 | orchestrator | 2025-09-03 00:50:57.715909 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2025-09-03 00:50:57.715914 | orchestrator | Wednesday 03 September 2025 00:50:17 +0000 (0:00:00.912) 0:05:26.493 *** 2025-09-03 00:50:57.715918 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:50:57.715923 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:50:57.715928 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:50:57.715933 | orchestrator | 2025-09-03 00:50:57.715938 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2025-09-03 00:50:57.715943 | orchestrator | Wednesday 03 September 2025 00:50:18 +0000 (0:00:01.236) 0:05:27.730 *** 2025-09-03 00:50:57.715948 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:50:57.715953 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:50:57.715960 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:50:57.715965 | orchestrator | 2025-09-03 00:50:57.715970 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2025-09-03 00:50:57.715990 | orchestrator | Wednesday 03 September 2025 00:50:19 +0000 (0:00:00.909) 0:05:28.639 *** 2025-09-03 00:50:57.715995 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:50:57.716000 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:50:57.716005 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:50:57.716010 | orchestrator | 2025-09-03 00:50:57.716015 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2025-09-03 00:50:57.716020 | orchestrator | Wednesday 03 September 2025 00:50:24 +0000 (0:00:04.751) 0:05:33.391 *** 2025-09-03 00:50:57.716024 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:50:57.716029 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:50:57.716034 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:50:57.716039 | orchestrator | 2025-09-03 00:50:57.716044 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2025-09-03 00:50:57.716052 | orchestrator | Wednesday 03 September 2025 00:50:27 +0000 (0:00:03.717) 0:05:37.108 *** 2025-09-03 00:50:57.716057 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:50:57.716061 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:50:57.716066 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:50:57.716071 | orchestrator | 2025-09-03 00:50:57.716076 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 
2025-09-03 00:50:57.716081 | orchestrator | Wednesday 03 September 2025 00:50:36 +0000 (0:00:08.415) 0:05:45.524 *** 2025-09-03 00:50:57.716086 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:50:57.716091 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:50:57.716095 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:50:57.716100 | orchestrator | 2025-09-03 00:50:57.716105 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2025-09-03 00:50:57.716110 | orchestrator | Wednesday 03 September 2025 00:50:40 +0000 (0:00:04.116) 0:05:49.640 *** 2025-09-03 00:50:57.716115 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:50:57.716120 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:50:57.716125 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:50:57.716129 | orchestrator | 2025-09-03 00:50:57.716134 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2025-09-03 00:50:57.716139 | orchestrator | Wednesday 03 September 2025 00:50:49 +0000 (0:00:09.260) 0:05:58.900 *** 2025-09-03 00:50:57.716144 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.716149 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.716154 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.716159 | orchestrator | 2025-09-03 00:50:57.716163 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2025-09-03 00:50:57.716168 | orchestrator | Wednesday 03 September 2025 00:50:50 +0000 (0:00:00.326) 0:05:59.226 *** 2025-09-03 00:50:57.716173 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.716178 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.716183 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.716188 | orchestrator | 2025-09-03 00:50:57.716193 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2025-09-03 00:50:57.716198 | orchestrator | Wednesday 03 September 2025 00:50:50 +0000 (0:00:00.339) 0:05:59.566 *** 2025-09-03 00:50:57.716202 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.716207 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.716212 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.716217 | orchestrator | 2025-09-03 00:50:57.716222 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2025-09-03 00:50:57.716227 | orchestrator | Wednesday 03 September 2025 00:50:51 +0000 (0:00:00.674) 0:06:00.240 *** 2025-09-03 00:50:57.716232 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.716236 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.716241 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.716246 | orchestrator | 2025-09-03 00:50:57.716251 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2025-09-03 00:50:57.716256 | orchestrator | Wednesday 03 September 2025 00:50:51 +0000 (0:00:00.336) 0:06:00.577 *** 2025-09-03 00:50:57.716265 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.716270 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.716275 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.716280 | orchestrator | 2025-09-03 00:50:57.716285 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2025-09-03 00:50:57.716289 | orchestrator | Wednesday 03 September 2025 00:50:51 
+0000 (0:00:00.329) 0:06:00.906 *** 2025-09-03 00:50:57.716294 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:50:57.716299 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:50:57.716304 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:50:57.716309 | orchestrator | 2025-09-03 00:50:57.716314 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2025-09-03 00:50:57.716319 | orchestrator | Wednesday 03 September 2025 00:50:52 +0000 (0:00:00.352) 0:06:01.259 *** 2025-09-03 00:50:57.716324 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:50:57.716328 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:50:57.716333 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:50:57.716338 | orchestrator | 2025-09-03 00:50:57.716343 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2025-09-03 00:50:57.716348 | orchestrator | Wednesday 03 September 2025 00:50:53 +0000 (0:00:01.334) 0:06:02.593 *** 2025-09-03 00:50:57.716353 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:50:57.716358 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:50:57.716363 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:50:57.716367 | orchestrator | 2025-09-03 00:50:57.716372 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-03 00:50:57.716377 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-09-03 00:50:57.716382 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-09-03 00:50:57.716387 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-09-03 00:50:57.716392 | orchestrator | 2025-09-03 00:50:57.716397 | orchestrator | 2025-09-03 00:50:57.716404 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-03 00:50:57.716409 | orchestrator | Wednesday 03 September 2025 00:50:54 +0000 (0:00:00.828) 0:06:03.421 *** 2025-09-03 00:50:57.716414 | orchestrator | =============================================================================== 2025-09-03 00:50:57.716419 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.26s 2025-09-03 00:50:57.716424 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 8.42s 2025-09-03 00:50:57.716429 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.20s 2025-09-03 00:50:57.716434 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 4.93s 2025-09-03 00:50:57.716439 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 4.81s 2025-09-03 00:50:57.716446 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 4.75s 2025-09-03 00:50:57.716451 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.52s 2025-09-03 00:50:57.716456 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.47s 2025-09-03 00:50:57.716461 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 4.47s 2025-09-03 00:50:57.716466 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.42s 2025-09-03 00:50:57.716471 | orchestrator | haproxy-config : Copying 
over nova-cell:nova-novncproxy haproxy config --- 4.30s 2025-09-03 00:50:57.716476 | orchestrator | service-cert-copy : loadbalancer | Copying over extra CA certificates --- 4.20s 2025-09-03 00:50:57.716481 | orchestrator | loadbalancer : Wait for backup proxysql to start ------------------------ 4.12s 2025-09-03 00:50:57.716489 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 4.09s 2025-09-03 00:50:57.716494 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 4.08s 2025-09-03 00:50:57.716499 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.02s 2025-09-03 00:50:57.716504 | orchestrator | haproxy-config : Copying over placement haproxy config ------------------ 3.87s 2025-09-03 00:50:57.716509 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 3.86s 2025-09-03 00:50:57.716514 | orchestrator | loadbalancer : Copying over config.json files for services -------------- 3.84s 2025-09-03 00:50:57.716519 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 3.74s 2025-09-03 00:50:57.716524 | orchestrator | 2025-09-03 00:50:57 | INFO  | Task dc4e6937-8cfe-476c-b3c6-dd57d24263be is in state STARTED 2025-09-03 00:50:57.716529 | orchestrator | 2025-09-03 00:50:57 | INFO  | Task 9dc94f79-5744-45c7-ab6a-e86571d87258 is in state STARTED 2025-09-03 00:50:57.716534 | orchestrator | 2025-09-03 00:50:57 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:50:57.716539 | orchestrator | 2025-09-03 00:50:57 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:51:00.745853 | orchestrator | 2025-09-03 00:51:00 | INFO  | Task dc4e6937-8cfe-476c-b3c6-dd57d24263be is in state STARTED 2025-09-03 00:51:00.747804 | orchestrator | 2025-09-03 00:51:00 | INFO  | Task 9dc94f79-5744-45c7-ab6a-e86571d87258 is in state STARTED 2025-09-03 00:51:00.747835 | orchestrator | 2025-09-03 00:51:00 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:51:00.747847 | orchestrator | 2025-09-03 00:51:00 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:51:03.782928 | orchestrator | 2025-09-03 00:51:03 | INFO  | Task dc4e6937-8cfe-476c-b3c6-dd57d24263be is in state STARTED 2025-09-03 00:51:03.783848 | orchestrator | 2025-09-03 00:51:03 | INFO  | Task 9dc94f79-5744-45c7-ab6a-e86571d87258 is in state STARTED 2025-09-03 00:51:03.786331 | orchestrator | 2025-09-03 00:51:03 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:51:03.786374 | orchestrator | 2025-09-03 00:51:03 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:51:06.827588 | orchestrator | 2025-09-03 00:51:06 | INFO  | Task dc4e6937-8cfe-476c-b3c6-dd57d24263be is in state STARTED 2025-09-03 00:51:06.828702 | orchestrator | 2025-09-03 00:51:06 | INFO  | Task 9dc94f79-5744-45c7-ab6a-e86571d87258 is in state STARTED 2025-09-03 00:51:06.830394 | orchestrator | 2025-09-03 00:51:06 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:51:06.830492 | orchestrator | 2025-09-03 00:51:06 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:51:09.869891 | orchestrator | 2025-09-03 00:51:09 | INFO  | Task dc4e6937-8cfe-476c-b3c6-dd57d24263be is in state STARTED 2025-09-03 00:51:09.870152 | orchestrator | 2025-09-03 00:51:09 | INFO  | Task 9dc94f79-5744-45c7-ab6a-e86571d87258 is in state STARTED 2025-09-03 00:51:09.870172 | 
orchestrator | 2025-09-03 00:51:09 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7
is in state STARTED 2025-09-03 00:52:53.609218 | orchestrator | 2025-09-03 00:52:53 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:52:56.661828 | orchestrator | 2025-09-03 00:52:56 | INFO  | Task dc4e6937-8cfe-476c-b3c6-dd57d24263be is in state STARTED 2025-09-03 00:52:56.663050 | orchestrator | 2025-09-03 00:52:56 | INFO  | Task 9dc94f79-5744-45c7-ab6a-e86571d87258 is in state STARTED 2025-09-03 00:52:56.664482 | orchestrator | 2025-09-03 00:52:56 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:52:56.664845 | orchestrator | 2025-09-03 00:52:56 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:52:59.706504 | orchestrator | 2025-09-03 00:52:59 | INFO  | Task dc4e6937-8cfe-476c-b3c6-dd57d24263be is in state STARTED 2025-09-03 00:52:59.708846 | orchestrator | 2025-09-03 00:52:59 | INFO  | Task 9dc94f79-5744-45c7-ab6a-e86571d87258 is in state STARTED 2025-09-03 00:52:59.711142 | orchestrator | 2025-09-03 00:52:59 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:52:59.711956 | orchestrator | 2025-09-03 00:52:59 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:53:02.758105 | orchestrator | 2025-09-03 00:53:02 | INFO  | Task dc4e6937-8cfe-476c-b3c6-dd57d24263be is in state STARTED 2025-09-03 00:53:02.759321 | orchestrator | 2025-09-03 00:53:02 | INFO  | Task 9dc94f79-5744-45c7-ab6a-e86571d87258 is in state STARTED 2025-09-03 00:53:02.761330 | orchestrator | 2025-09-03 00:53:02 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state STARTED 2025-09-03 00:53:02.761637 | orchestrator | 2025-09-03 00:53:02 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:53:05.812361 | orchestrator | 2025-09-03 00:53:05 | INFO  | Task dc4e6937-8cfe-476c-b3c6-dd57d24263be is in state STARTED 2025-09-03 00:53:05.813437 | orchestrator | 2025-09-03 00:53:05 | INFO  | Task 9dc94f79-5744-45c7-ab6a-e86571d87258 is in state STARTED 2025-09-03 00:53:05.820413 | orchestrator | 2025-09-03 00:53:05 | INFO  | Task 0ec25ec1-7666-4bd2-8235-0d101f871be7 is in state SUCCESS 2025-09-03 00:53:05.824036 | orchestrator | 2025-09-03 00:53:05.824070 | orchestrator | 2025-09-03 00:53:05.824083 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2025-09-03 00:53:05.824096 | orchestrator | 2025-09-03 00:53:05.824107 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-09-03 00:53:05.824158 | orchestrator | Wednesday 03 September 2025 00:42:24 +0000 (0:00:00.755) 0:00:00.755 *** 2025-09-03 00:53:05.824172 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 00:53:05.824184 | orchestrator | 2025-09-03 00:53:05.824195 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-09-03 00:53:05.824207 | orchestrator | Wednesday 03 September 2025 00:42:25 +0000 (0:00:01.124) 0:00:01.880 *** 2025-09-03 00:53:05.824219 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.824234 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.824245 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.824257 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:53:05.824268 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:53:05.824280 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:53:05.824295 | orchestrator | 2025-09-03 00:53:05.824313 | 
orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-09-03 00:53:05.824332 | orchestrator | Wednesday 03 September 2025 00:42:26 +0000 (0:00:01.457) 0:00:03.337 *** 2025-09-03 00:53:05.824350 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.824370 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.824390 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.824408 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:53:05.824428 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:53:05.824440 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:53:05.824451 | orchestrator | 2025-09-03 00:53:05.824666 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-09-03 00:53:05.824681 | orchestrator | Wednesday 03 September 2025 00:42:27 +0000 (0:00:00.819) 0:00:04.156 *** 2025-09-03 00:53:05.824693 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.824706 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.824719 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.824733 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:53:05.824747 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:53:05.824761 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:53:05.824774 | orchestrator | 2025-09-03 00:53:05.824787 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-09-03 00:53:05.824801 | orchestrator | Wednesday 03 September 2025 00:42:28 +0000 (0:00:00.945) 0:00:05.102 *** 2025-09-03 00:53:05.824814 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.824827 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.824840 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.824854 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:53:05.824866 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:53:05.824879 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:53:05.824892 | orchestrator | 2025-09-03 00:53:05.824905 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-09-03 00:53:05.824919 | orchestrator | Wednesday 03 September 2025 00:42:29 +0000 (0:00:00.807) 0:00:05.910 *** 2025-09-03 00:53:05.824933 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.824946 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.825032 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.825047 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:53:05.825058 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:53:05.825069 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:53:05.825080 | orchestrator | 2025-09-03 00:53:05.825091 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-09-03 00:53:05.825102 | orchestrator | Wednesday 03 September 2025 00:42:30 +0000 (0:00:00.649) 0:00:06.559 *** 2025-09-03 00:53:05.825113 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.825124 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.825134 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:53:05.825145 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.825156 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:53:05.825167 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:53:05.825178 | orchestrator | 2025-09-03 00:53:05.825190 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-09-03 00:53:05.825201 | orchestrator | Wednesday 03 September 2025 
00:42:30 +0000 (0:00:00.901) 0:00:07.461 *** 2025-09-03 00:53:05.825212 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.825224 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.825235 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.825246 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.825257 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.825268 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.825279 | orchestrator | 2025-09-03 00:53:05.825290 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-09-03 00:53:05.825301 | orchestrator | Wednesday 03 September 2025 00:42:31 +0000 (0:00:00.804) 0:00:08.265 *** 2025-09-03 00:53:05.825312 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.825323 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.825334 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.825345 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:53:05.825356 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:53:05.825367 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:53:05.825378 | orchestrator | 2025-09-03 00:53:05.825389 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-09-03 00:53:05.825400 | orchestrator | Wednesday 03 September 2025 00:42:32 +0000 (0:00:00.858) 0:00:09.124 *** 2025-09-03 00:53:05.825411 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-03 00:53:05.825422 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-03 00:53:05.825433 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-03 00:53:05.825444 | orchestrator | 2025-09-03 00:53:05.825455 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-09-03 00:53:05.825476 | orchestrator | Wednesday 03 September 2025 00:42:33 +0000 (0:00:00.558) 0:00:09.682 *** 2025-09-03 00:53:05.825487 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.825498 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.825509 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.825520 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:53:05.825531 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:53:05.825541 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:53:05.825552 | orchestrator | 2025-09-03 00:53:05.825576 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-09-03 00:53:05.825587 | orchestrator | Wednesday 03 September 2025 00:42:34 +0000 (0:00:01.332) 0:00:11.014 *** 2025-09-03 00:53:05.825598 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-03 00:53:05.825609 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-03 00:53:05.825799 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-03 00:53:05.825811 | orchestrator | 2025-09-03 00:53:05.825822 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-09-03 00:53:05.825843 | orchestrator | Wednesday 03 September 2025 00:42:37 +0000 (0:00:02.914) 0:00:13.929 *** 2025-09-03 00:53:05.825855 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-09-03 00:53:05.825866 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-09-03 00:53:05.825877 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-09-03 00:53:05.825888 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.825900 | orchestrator | 2025-09-03 00:53:05.825911 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-09-03 00:53:05.825922 | orchestrator | Wednesday 03 September 2025 00:42:37 +0000 (0:00:00.385) 0:00:14.315 *** 2025-09-03 00:53:05.825935 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.825951 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.825978 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.825991 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.826002 | orchestrator | 2025-09-03 00:53:05.826013 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-09-03 00:53:05.826074 | orchestrator | Wednesday 03 September 2025 00:42:38 +0000 (0:00:00.811) 0:00:15.126 *** 2025-09-03 00:53:05.826088 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.826102 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.826114 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.826125 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.826137 | orchestrator | 2025-09-03 00:53:05.826148 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-09-03 00:53:05.826159 | orchestrator | Wednesday 03 September 2025 00:42:38 +0000 (0:00:00.308) 0:00:15.435 *** 2025-09-03 00:53:05.826185 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': 
'', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-09-03 00:42:35.067643', 'end': '2025-09-03 00:42:35.362450', 'delta': '0:00:00.294807', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.826209 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-09-03 00:42:35.924758', 'end': '2025-09-03 00:42:36.257949', 'delta': '0:00:00.333191', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.826352 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-09-03 00:42:36.919918', 'end': '2025-09-03 00:42:37.235010', 'delta': '0:00:00.315092', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.826379 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.826390 | orchestrator | 2025-09-03 00:53:05.826402 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-09-03 00:53:05.826413 | orchestrator | Wednesday 03 September 2025 00:42:39 +0000 (0:00:00.647) 0:00:16.082 *** 2025-09-03 00:53:05.826424 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.826435 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.826446 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.826457 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:53:05.826468 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:53:05.826478 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:53:05.826489 | orchestrator | 2025-09-03 00:53:05.826501 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-09-03 00:53:05.826512 | orchestrator | Wednesday 03 September 2025 00:42:41 +0000 (0:00:01.769) 0:00:17.851 *** 2025-09-03 00:53:05.826523 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-03 00:53:05.826534 | orchestrator | 2025-09-03 00:53:05.826545 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-09-03 00:53:05.826556 | orchestrator | Wednesday 03 
September 2025 00:42:43 +0000 (0:00:01.841) 0:00:19.693 *** 2025-09-03 00:53:05.826567 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.826578 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.826589 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.826600 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.826612 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.826622 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.826633 | orchestrator | 2025-09-03 00:53:05.826645 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-09-03 00:53:05.826656 | orchestrator | Wednesday 03 September 2025 00:42:44 +0000 (0:00:01.186) 0:00:20.879 *** 2025-09-03 00:53:05.826667 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.826678 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.826689 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.826700 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.826919 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.826933 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.826944 | orchestrator | 2025-09-03 00:53:05.826984 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-09-03 00:53:05.826996 | orchestrator | Wednesday 03 September 2025 00:42:45 +0000 (0:00:01.479) 0:00:22.358 *** 2025-09-03 00:53:05.827007 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.827018 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.827029 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.827040 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.827051 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.827062 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.827072 | orchestrator | 2025-09-03 00:53:05.827084 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-09-03 00:53:05.827095 | orchestrator | Wednesday 03 September 2025 00:42:46 +0000 (0:00:00.773) 0:00:23.132 *** 2025-09-03 00:53:05.827105 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.827116 | orchestrator | 2025-09-03 00:53:05.827127 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-09-03 00:53:05.827138 | orchestrator | Wednesday 03 September 2025 00:42:46 +0000 (0:00:00.112) 0:00:23.244 *** 2025-09-03 00:53:05.827149 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.827160 | orchestrator | 2025-09-03 00:53:05.827171 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-09-03 00:53:05.827182 | orchestrator | Wednesday 03 September 2025 00:42:47 +0000 (0:00:00.290) 0:00:23.535 *** 2025-09-03 00:53:05.827198 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.827210 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.827221 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.827232 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.827244 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.827255 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.827266 | orchestrator | 2025-09-03 00:53:05.827287 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-09-03 00:53:05.827299 | orchestrator | 
Wednesday 03 September 2025 00:42:47 +0000 (0:00:00.743) 0:00:24.278 *** 2025-09-03 00:53:05.827310 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.827321 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.827332 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.827392 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.827404 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.827415 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.827426 | orchestrator | 2025-09-03 00:53:05.827437 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-09-03 00:53:05.827448 | orchestrator | Wednesday 03 September 2025 00:42:48 +0000 (0:00:00.866) 0:00:25.145 *** 2025-09-03 00:53:05.827459 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.827470 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.827481 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.827493 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.827504 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.827515 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.827525 | orchestrator | 2025-09-03 00:53:05.827537 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-09-03 00:53:05.827548 | orchestrator | Wednesday 03 September 2025 00:42:49 +0000 (0:00:00.732) 0:00:25.878 *** 2025-09-03 00:53:05.827560 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.827571 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.827582 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.827592 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.827603 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.827615 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.827626 | orchestrator | 2025-09-03 00:53:05.827637 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-09-03 00:53:05.827648 | orchestrator | Wednesday 03 September 2025 00:42:50 +0000 (0:00:00.726) 0:00:26.604 *** 2025-09-03 00:53:05.827668 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.827679 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.827690 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.827701 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.827712 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.827723 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.827734 | orchestrator | 2025-09-03 00:53:05.827746 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-09-03 00:53:05.827757 | orchestrator | Wednesday 03 September 2025 00:42:50 +0000 (0:00:00.662) 0:00:27.267 *** 2025-09-03 00:53:05.827769 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.827780 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.827791 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.827802 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.827813 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.827825 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.827835 | orchestrator | 2025-09-03 00:53:05.827847 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-09-03 00:53:05.827858 | 
orchestrator | Wednesday 03 September 2025 00:42:51 +0000 (0:00:00.827) 0:00:28.094 *** 2025-09-03 00:53:05.827869 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.827880 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.827892 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.827903 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.827914 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.828179 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.828192 | orchestrator | 2025-09-03 00:53:05.828204 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-09-03 00:53:05.828215 | orchestrator | Wednesday 03 September 2025 00:42:52 +0000 (0:00:00.772) 0:00:28.867 *** 2025-09-03 00:53:05.828228 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d05881db--8953--52a0--98ec--dd1036bee846-osd--block--d05881db--8953--52a0--98ec--dd1036bee846', 'dm-uuid-LVM-vVB7WYB05SG5ksLYtniNiR4wu8glVMPWKhYtoiaiSt5OIqt1nPLfaqf1U7zjf7YR'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-03 00:53:05.828242 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2e5a0ee6--219f--5b14--b340--2bfd497a8fc5-osd--block--2e5a0ee6--219f--5b14--b340--2bfd497a8fc5', 'dm-uuid-LVM-1KmnNiiQVjzl7pN9nTYyE5njRkbNrYz4h9XU6mMO0bdkLOvKg9lVlzPT5w2fmM4x'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-03 00:53:05.828269 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-03 00:53:05.828283 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-03 00:53:05.828317 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-03 00:53:05.828330 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-03 00:53:05.828342 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-03 00:53:05.828354 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-03 00:53:05.828365 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--400ae980--4c36--5b9b--960d--631158f9c2c9-osd--block--400ae980--4c36--5b9b--960d--631158f9c2c9', 'dm-uuid-LVM-IDjOLjbNgO5Gcv2cLb1cPZmsNftrK9fCNyLEUMtCihcv5KL0yIzEzjBRtXrX5eQW'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-03 00:53:05.828377 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-03 00:53:05.828389 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-03 00:53:05.828413 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1107a6cb--8e5a--5215--8b60--1d473d685075-osd--block--1107a6cb--8e5a--5215--8b60--1d473d685075', 'dm-uuid-LVM-oNSc4vHMRM98uwbAfYefcePJlvTU2Nwwkd7GNCBmrAmQOPr2gvTWdfLuYAQHjDSI'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-03 00:53:05.828426 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-03 00:53:05.828449 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc54dbc2-85fa-4a7d-8bd9-52ff930caf77', 'scsi-SQEMU_QEMU_HARDDISK_fc54dbc2-85fa-4a7d-8bd9-52ff930caf77'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc54dbc2-85fa-4a7d-8bd9-52ff930caf77-part1', 'scsi-SQEMU_QEMU_HARDDISK_fc54dbc2-85fa-4a7d-8bd9-52ff930caf77-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc54dbc2-85fa-4a7d-8bd9-52ff930caf77-part14', 'scsi-SQEMU_QEMU_HARDDISK_fc54dbc2-85fa-4a7d-8bd9-52ff930caf77-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc54dbc2-85fa-4a7d-8bd9-52ff930caf77-part15', 'scsi-SQEMU_QEMU_HARDDISK_fc54dbc2-85fa-4a7d-8bd9-52ff930caf77-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc54dbc2-85fa-4a7d-8bd9-52ff930caf77-part16', 'scsi-SQEMU_QEMU_HARDDISK_fc54dbc2-85fa-4a7d-8bd9-52ff930caf77-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-03 00:53:05.828464 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--d05881db--8953--52a0--98ec--dd1036bee846-osd--block--d05881db--8953--52a0--98ec--dd1036bee846'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-dNVKS1-h0I6-cKeQ-KM7E-yqkM-njrQ-MJtXNz', 'scsi-0QEMU_QEMU_HARDDISK_9ba28649-84e7-4d30-a12b-e93c6e95fbcd', 'scsi-SQEMU_QEMU_HARDDISK_9ba28649-84e7-4d30-a12b-e93c6e95fbcd'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-03 00:53:05.828489 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--2e5a0ee6--219f--5b14--b340--2bfd497a8fc5-osd--block--2e5a0ee6--219f--5b14--b340--2bfd497a8fc5'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-CLcDvK-RJok-AP8L-Ull5-Xzqq-bCBS-35j80d', 'scsi-0QEMU_QEMU_HARDDISK_7512b390-1fa3-4840-9943-7c6482fdb145', 'scsi-SQEMU_QEMU_HARDDISK_7512b390-1fa3-4840-9943-7c6482fdb145'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-03 00:53:05.828511 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-03 00:53:05.828523 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e885087e-46ab-46e4-825b-bdcddcbfdff8', 'scsi-SQEMU_QEMU_HARDDISK_e885087e-46ab-46e4-825b-bdcddcbfdff8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-03 00:53:05.828535 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-03 00:53:05.828547 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-03-00-02-04-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-03 00:53:05.828558 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-03 00:53:05.828570 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-03 00:53:05.828581 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e75c81d9--f6c1--538f--9534--cc9e3445127a-osd--block--e75c81d9--f6c1--538f--9534--cc9e3445127a', 'dm-uuid-LVM-uV6mu7VkeLpyFdoMIvc3kKIapvcN5sCpS5UiCwvVt0Ysgo8oPMe1pPugUZ86q7Qi'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-03 00:53:05.828598 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 
1}})  2025-09-03 00:53:05.828624 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--634e15af--8858--53e6--9f62--917e12b08878-osd--block--634e15af--8858--53e6--9f62--917e12b08878', 'dm-uuid-LVM-7atDwx2fefji4hgurJcmdtXUoHrK2uhSrGDUFw19zEE3Dr1YqTd7rS8tCRJuyUGB'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-03 00:53:05.828637 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-03 00:53:05.828649 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-03 00:53:05.828661 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-03 00:53:05.828672 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-03 00:53:05.828684 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-03 00:53:05.828696 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-03 00:53:05.828747 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': 
[], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-03 00:53:05.828777 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c87564b4-441b-42f3-97de-587e6061c3ae', 'scsi-SQEMU_QEMU_HARDDISK_c87564b4-441b-42f3-97de-587e6061c3ae'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c87564b4-441b-42f3-97de-587e6061c3ae-part1', 'scsi-SQEMU_QEMU_HARDDISK_c87564b4-441b-42f3-97de-587e6061c3ae-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c87564b4-441b-42f3-97de-587e6061c3ae-part14', 'scsi-SQEMU_QEMU_HARDDISK_c87564b4-441b-42f3-97de-587e6061c3ae-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c87564b4-441b-42f3-97de-587e6061c3ae-part15', 'scsi-SQEMU_QEMU_HARDDISK_c87564b4-441b-42f3-97de-587e6061c3ae-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c87564b4-441b-42f3-97de-587e6061c3ae-part16', 'scsi-SQEMU_QEMU_HARDDISK_c87564b4-441b-42f3-97de-587e6061c3ae-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-03 00:53:05.828799 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-03 00:53:05.828811 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--400ae980--4c36--5b9b--960d--631158f9c2c9-osd--block--400ae980--4c36--5b9b--960d--631158f9c2c9'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Mv3N52-jcqW-f5oK-qm4Y-NwnR-LFlk-3Lul3G', 'scsi-0QEMU_QEMU_HARDDISK_f4ffaa61-7d7a-4b4d-ae66-bf9c1470deb3', 'scsi-SQEMU_QEMU_HARDDISK_f4ffaa61-7d7a-4b4d-ae66-bf9c1470deb3'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-03 00:53:05.828823 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-03 00:53:05.828844 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--1107a6cb--8e5a--5215--8b60--1d473d685075-osd--block--1107a6cb--8e5a--5215--8b60--1d473d685075'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-k9rpRE-oQfZ-x2kK-rb0E-T1A0-6v1a-SlK6kI', 'scsi-0QEMU_QEMU_HARDDISK_89937d38-622a-4519-a70d-71f9b6cc380e', 'scsi-SQEMU_QEMU_HARDDISK_89937d38-622a-4519-a70d-71f9b6cc380e'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-03 00:53:05.828863 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.828882 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2aa4af3c-ac98-453f-b557-6d0c203c4201', 'scsi-SQEMU_QEMU_HARDDISK_2aa4af3c-ac98-453f-b557-6d0c203c4201'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-03 00:53:05.828895 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-03 00:53:05.828906 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-03-00-02-08-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-03 00:53:05.828919 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e8670175-54bd-41b5-bd3c-dd9ea44e7b4a', 'scsi-SQEMU_QEMU_HARDDISK_e8670175-54bd-41b5-bd3c-dd9ea44e7b4a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e8670175-54bd-41b5-bd3c-dd9ea44e7b4a-part1', 'scsi-SQEMU_QEMU_HARDDISK_e8670175-54bd-41b5-bd3c-dd9ea44e7b4a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e8670175-54bd-41b5-bd3c-dd9ea44e7b4a-part14', 'scsi-SQEMU_QEMU_HARDDISK_e8670175-54bd-41b5-bd3c-dd9ea44e7b4a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e8670175-54bd-41b5-bd3c-dd9ea44e7b4a-part15', 'scsi-SQEMU_QEMU_HARDDISK_e8670175-54bd-41b5-bd3c-dd9ea44e7b4a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e8670175-54bd-41b5-bd3c-dd9ea44e7b4a-part16', 'scsi-SQEMU_QEMU_HARDDISK_e8670175-54bd-41b5-bd3c-dd9ea44e7b4a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-03 00:53:05.828942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-03 00:53:05.828982 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--e75c81d9--f6c1--538f--9534--cc9e3445127a-osd--block--e75c81d9--f6c1--538f--9534--cc9e3445127a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-6DRCRF-1ldo-e2BN-8keY-ovu4-LPee-swW9qe', 'scsi-0QEMU_QEMU_HARDDISK_409307c9-8e7f-483b-a404-5462fce46233', 'scsi-SQEMU_QEMU_HARDDISK_409307c9-8e7f-483b-a404-5462fce46233'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-03 00:53:05.828995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-03 00:53:05.829006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-03 00:53:05.829018 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-03 00:53:05.829029 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-03 00:53:05.829040 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--634e15af--8858--53e6--9f62--917e12b08878-osd--block--634e15af--8858--53e6--9f62--917e12b08878'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-UTntes-z8qa-dWgQ-K8BI-IKj9-wLWC-XmEeXz', 'scsi-0QEMU_QEMU_HARDDISK_ce19fbd3-6a41-4577-8f91-9183654abf8c', 'scsi-SQEMU_QEMU_HARDDISK_ce19fbd3-6a41-4577-8f91-9183654abf8c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-03 00:53:05.829059 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4852aea-51af-4111-8e77-3990a105da37', 'scsi-SQEMU_QEMU_HARDDISK_d4852aea-51af-4111-8e77-3990a105da37'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-03 00:53:05.829083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-03 00:53:05.829095 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-03-00-02-10-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-03 00:53:05.829107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-03 00:53:05.829118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-03 00:53:05.829129 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.829142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a8d7701-fa3c-42c5-9179-39e748f0f96d', 'scsi-SQEMU_QEMU_HARDDISK_8a8d7701-fa3c-42c5-9179-39e748f0f96d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a8d7701-fa3c-42c5-9179-39e748f0f96d-part1', 'scsi-SQEMU_QEMU_HARDDISK_8a8d7701-fa3c-42c5-9179-39e748f0f96d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a8d7701-fa3c-42c5-9179-39e748f0f96d-part14', 'scsi-SQEMU_QEMU_HARDDISK_8a8d7701-fa3c-42c5-9179-39e748f0f96d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a8d7701-fa3c-42c5-9179-39e748f0f96d-part15', 'scsi-SQEMU_QEMU_HARDDISK_8a8d7701-fa3c-42c5-9179-39e748f0f96d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a8d7701-fa3c-42c5-9179-39e748f0f96d-part16', 'scsi-SQEMU_QEMU_HARDDISK_8a8d7701-fa3c-42c5-9179-39e748f0f96d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-03 00:53:05.829174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-03-00-02-06-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-03 00:53:05.829186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-03 00:53:05.829198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-03 00:53:05.829209 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-03 00:53:05.829220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-03 00:53:05.829232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-03 00:53:05.829243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-03 00:53:05.829254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-03 00:53:05.829274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-03 00:53:05.829300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c9bc7981-e388-467f-b59a-2076c31d0343', 'scsi-SQEMU_QEMU_HARDDISK_c9bc7981-e388-467f-b59a-2076c31d0343'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c9bc7981-e388-467f-b59a-2076c31d0343-part1', 'scsi-SQEMU_QEMU_HARDDISK_c9bc7981-e388-467f-b59a-2076c31d0343-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c9bc7981-e388-467f-b59a-2076c31d0343-part14', 'scsi-SQEMU_QEMU_HARDDISK_c9bc7981-e388-467f-b59a-2076c31d0343-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c9bc7981-e388-467f-b59a-2076c31d0343-part15', 'scsi-SQEMU_QEMU_HARDDISK_c9bc7981-e388-467f-b59a-2076c31d0343-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c9bc7981-e388-467f-b59a-2076c31d0343-part16', 'scsi-SQEMU_QEMU_HARDDISK_c9bc7981-e388-467f-b59a-2076c31d0343-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-03 00:53:05.829314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-03-00-02-02-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-03 00:53:05.829326 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.829338 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.829349 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.829360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-03 00:53:05.829378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-03 00:53:05.829390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-03 00:53:05.829406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-03 00:53:05.829425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-03 00:53:05.829437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-03 00:53:05.829449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-03 00:53:05.829460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-03 00:53:05.829472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5cc3399e-7952-4fd6-9ff6-a2b0255266c3', 'scsi-SQEMU_QEMU_HARDDISK_5cc3399e-7952-4fd6-9ff6-a2b0255266c3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5cc3399e-7952-4fd6-9ff6-a2b0255266c3-part1', 'scsi-SQEMU_QEMU_HARDDISK_5cc3399e-7952-4fd6-9ff6-a2b0255266c3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5cc3399e-7952-4fd6-9ff6-a2b0255266c3-part14', 'scsi-SQEMU_QEMU_HARDDISK_5cc3399e-7952-4fd6-9ff6-a2b0255266c3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5cc3399e-7952-4fd6-9ff6-a2b0255266c3-part15', 'scsi-SQEMU_QEMU_HARDDISK_5cc3399e-7952-4fd6-9ff6-a2b0255266c3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5cc3399e-7952-4fd6-9ff6-a2b0255266c3-part16', 'scsi-SQEMU_QEMU_HARDDISK_5cc3399e-7952-4fd6-9ff6-a2b0255266c3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-03 00:53:05.829516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-03-00-02-06-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-03 00:53:05.829529 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.829540 | orchestrator | 2025-09-03 00:53:05.829551 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-09-03 00:53:05.829563 | orchestrator | Wednesday 03 September 2025 00:42:53 +0000 (0:00:01.283) 0:00:30.150 *** 2025-09-03 00:53:05.829575 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d05881db--8953--52a0--98ec--dd1036bee846-osd--block--d05881db--8953--52a0--98ec--dd1036bee846', 'dm-uuid-LVM-vVB7WYB05SG5ksLYtniNiR4wu8glVMPWKhYtoiaiSt5OIqt1nPLfaqf1U7zjf7YR'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.829589 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2e5a0ee6--219f--5b14--b340--2bfd497a8fc5-osd--block--2e5a0ee6--219f--5b14--b340--2bfd497a8fc5', 'dm-uuid-LVM-1KmnNiiQVjzl7pN9nTYyE5njRkbNrYz4h9XU6mMO0bdkLOvKg9lVlzPT5w2fmM4x'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.829601 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.829620 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.829632 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.829657 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': 
'0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.829670 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.829682 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.829693 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.829711 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--400ae980--4c36--5b9b--960d--631158f9c2c9-osd--block--400ae980--4c36--5b9b--960d--631158f9c2c9', 'dm-uuid-LVM-IDjOLjbNgO5Gcv2cLb1cPZmsNftrK9fCNyLEUMtCihcv5KL0yIzEzjBRtXrX5eQW'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.829723 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.829751 | orchestrator | skipping: [testbed-node-3] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc54dbc2-85fa-4a7d-8bd9-52ff930caf77', 'scsi-SQEMU_QEMU_HARDDISK_fc54dbc2-85fa-4a7d-8bd9-52ff930caf77'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc54dbc2-85fa-4a7d-8bd9-52ff930caf77-part1', 'scsi-SQEMU_QEMU_HARDDISK_fc54dbc2-85fa-4a7d-8bd9-52ff930caf77-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc54dbc2-85fa-4a7d-8bd9-52ff930caf77-part14', 'scsi-SQEMU_QEMU_HARDDISK_fc54dbc2-85fa-4a7d-8bd9-52ff930caf77-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc54dbc2-85fa-4a7d-8bd9-52ff930caf77-part15', 'scsi-SQEMU_QEMU_HARDDISK_fc54dbc2-85fa-4a7d-8bd9-52ff930caf77-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc54dbc2-85fa-4a7d-8bd9-52ff930caf77-part16', 'scsi-SQEMU_QEMU_HARDDISK_fc54dbc2-85fa-4a7d-8bd9-52ff930caf77-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.829765 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e75c81d9--f6c1--538f--9534--cc9e3445127a-osd--block--e75c81d9--f6c1--538f--9534--cc9e3445127a', 'dm-uuid-LVM-uV6mu7VkeLpyFdoMIvc3kKIapvcN5sCpS5UiCwvVt0Ysgo8oPMe1pPugUZ86q7Qi'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.829783 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--634e15af--8858--53e6--9f62--917e12b08878-osd--block--634e15af--8858--53e6--9f62--917e12b08878', 
'dm-uuid-LVM-7atDwx2fefji4hgurJcmdtXUoHrK2uhSrGDUFw19zEE3Dr1YqTd7rS8tCRJuyUGB'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.829795 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.829819 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.829832 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--d05881db--8953--52a0--98ec--dd1036bee846-osd--block--d05881db--8953--52a0--98ec--dd1036bee846'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-dNVKS1-h0I6-cKeQ-KM7E-yqkM-njrQ-MJtXNz', 'scsi-0QEMU_QEMU_HARDDISK_9ba28649-84e7-4d30-a12b-e93c6e95fbcd', 'scsi-SQEMU_QEMU_HARDDISK_9ba28649-84e7-4d30-a12b-e93c6e95fbcd'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.829844 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.829862 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.829873 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.829885 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.829909 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.829921 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--2e5a0ee6--219f--5b14--b340--2bfd497a8fc5-osd--block--2e5a0ee6--219f--5b14--b340--2bfd497a8fc5'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-CLcDvK-RJok-AP8L-Ull5-Xzqq-bCBS-35j80d', 'scsi-0QEMU_QEMU_HARDDISK_7512b390-1fa3-4840-9943-7c6482fdb145', 'scsi-SQEMU_QEMU_HARDDISK_7512b390-1fa3-4840-9943-7c6482fdb145'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.829933 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.829951 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.829980 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.829992 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.830725 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.830754 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.830767 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e885087e-46ab-46e4-825b-bdcddcbfdff8', 'scsi-SQEMU_QEMU_HARDDISK_e885087e-46ab-46e4-825b-bdcddcbfdff8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.830789 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.830800 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.830812 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1107a6cb--8e5a--5215--8b60--1d473d685075-osd--block--1107a6cb--8e5a--5215--8b60--1d473d685075', 'dm-uuid-LVM-oNSc4vHMRM98uwbAfYefcePJlvTU2Nwwkd7GNCBmrAmQOPr2gvTWdfLuYAQHjDSI'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.830838 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-03-00-02-04-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.830851 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.830864 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a8d7701-fa3c-42c5-9179-39e748f0f96d', 'scsi-SQEMU_QEMU_HARDDISK_8a8d7701-fa3c-42c5-9179-39e748f0f96d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a8d7701-fa3c-42c5-9179-39e748f0f96d-part1', 'scsi-SQEMU_QEMU_HARDDISK_8a8d7701-fa3c-42c5-9179-39e748f0f96d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a8d7701-fa3c-42c5-9179-39e748f0f96d-part14', 'scsi-SQEMU_QEMU_HARDDISK_8a8d7701-fa3c-42c5-9179-39e748f0f96d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a8d7701-fa3c-42c5-9179-39e748f0f96d-part15', 'scsi-SQEMU_QEMU_HARDDISK_8a8d7701-fa3c-42c5-9179-39e748f0f96d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a8d7701-fa3c-42c5-9179-39e748f0f96d-part16', 'scsi-SQEMU_QEMU_HARDDISK_8a8d7701-fa3c-42c5-9179-39e748f0f96d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.830898 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e8670175-54bd-41b5-bd3c-dd9ea44e7b4a', 'scsi-SQEMU_QEMU_HARDDISK_e8670175-54bd-41b5-bd3c-dd9ea44e7b4a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e8670175-54bd-41b5-bd3c-dd9ea44e7b4a-part1', 'scsi-SQEMU_QEMU_HARDDISK_e8670175-54bd-41b5-bd3c-dd9ea44e7b4a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e8670175-54bd-41b5-bd3c-dd9ea44e7b4a-part14', 'scsi-SQEMU_QEMU_HARDDISK_e8670175-54bd-41b5-bd3c-dd9ea44e7b4a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e8670175-54bd-41b5-bd3c-dd9ea44e7b4a-part15', 'scsi-SQEMU_QEMU_HARDDISK_e8670175-54bd-41b5-bd3c-dd9ea44e7b4a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e8670175-54bd-41b5-bd3c-dd9ea44e7b4a-part16', 'scsi-SQEMU_QEMU_HARDDISK_e8670175-54bd-41b5-bd3c-dd9ea44e7b4a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.830918 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.830931 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-03-00-02-06-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.830943 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--e75c81d9--f6c1--538f--9534--cc9e3445127a-osd--block--e75c81d9--f6c1--538f--9534--cc9e3445127a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-6DRCRF-1ldo-e2BN-8keY-ovu4-LPee-swW9qe', 'scsi-0QEMU_QEMU_HARDDISK_409307c9-8e7f-483b-a404-5462fce46233', 'scsi-SQEMU_QEMU_HARDDISK_409307c9-8e7f-483b-a404-5462fce46233'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.831024 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.831040 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.831051 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--634e15af--8858--53e6--9f62--917e12b08878-osd--block--634e15af--8858--53e6--9f62--917e12b08878'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-UTntes-z8qa-dWgQ-K8BI-IKj9-wLWC-XmEeXz', 'scsi-0QEMU_QEMU_HARDDISK_ce19fbd3-6a41-4577-8f91-9183654abf8c', 'scsi-SQEMU_QEMU_HARDDISK_ce19fbd3-6a41-4577-8f91-9183654abf8c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.831075 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.831087 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4852aea-51af-4111-8e77-3990a105da37', 'scsi-SQEMU_QEMU_HARDDISK_d4852aea-51af-4111-8e77-3990a105da37'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.831102 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.831119 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-03-00-02-10-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.831130 | orchestrator | 
skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.831147 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.831157 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.831168 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.831178 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.831198 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.831210 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.831226 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.831236 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.831248 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c9bc7981-e388-467f-b59a-2076c31d0343', 'scsi-SQEMU_QEMU_HARDDISK_c9bc7981-e388-467f-b59a-2076c31d0343'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c9bc7981-e388-467f-b59a-2076c31d0343-part1', 'scsi-SQEMU_QEMU_HARDDISK_c9bc7981-e388-467f-b59a-2076c31d0343-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c9bc7981-e388-467f-b59a-2076c31d0343-part14', 'scsi-SQEMU_QEMU_HARDDISK_c9bc7981-e388-467f-b59a-2076c31d0343-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c9bc7981-e388-467f-b59a-2076c31d0343-part15', 'scsi-SQEMU_QEMU_HARDDISK_c9bc7981-e388-467f-b59a-2076c31d0343-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c9bc7981-e388-467f-b59a-2076c31d0343-part16', 'scsi-SQEMU_QEMU_HARDDISK_c9bc7981-e388-467f-b59a-2076c31d0343-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.831259 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.831270 | orchestrator | skipping: [testbed-node-4] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.831291 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-03-00-02-02-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.831310 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.831323 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.831336 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.831348 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.831360 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.831372 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.831393 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.831407 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c87564b4-441b-42f3-97de-587e6061c3ae', 'scsi-SQEMU_QEMU_HARDDISK_c87564b4-441b-42f3-97de-587e6061c3ae'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c87564b4-441b-42f3-97de-587e6061c3ae-part1', 'scsi-SQEMU_QEMU_HARDDISK_c87564b4-441b-42f3-97de-587e6061c3ae-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c87564b4-441b-42f3-97de-587e6061c3ae-part14', 'scsi-SQEMU_QEMU_HARDDISK_c87564b4-441b-42f3-97de-587e6061c3ae-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c87564b4-441b-42f3-97de-587e6061c3ae-part15', 'scsi-SQEMU_QEMU_HARDDISK_c87564b4-441b-42f3-97de-587e6061c3ae-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c87564b4-441b-42f3-97de-587e6061c3ae-part16', 'scsi-SQEMU_QEMU_HARDDISK_c87564b4-441b-42f3-97de-587e6061c3ae-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.831426 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.831438 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.831462 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 
'value': {'holders': ['ceph--400ae980--4c36--5b9b--960d--631158f9c2c9-osd--block--400ae980--4c36--5b9b--960d--631158f9c2c9'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Mv3N52-jcqW-f5oK-qm4Y-NwnR-LFlk-3Lul3G', 'scsi-0QEMU_QEMU_HARDDISK_f4ffaa61-7d7a-4b4d-ae66-bf9c1470deb3', 'scsi-SQEMU_QEMU_HARDDISK_f4ffaa61-7d7a-4b4d-ae66-bf9c1470deb3'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.831480 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.831493 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--1107a6cb--8e5a--5215--8b60--1d473d685075-osd--block--1107a6cb--8e5a--5215--8b60--1d473d685075'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-k9rpRE-oQfZ-x2kK-rb0E-T1A0-6v1a-SlK6kI', 'scsi-0QEMU_QEMU_HARDDISK_89937d38-622a-4519-a70d-71f9b6cc380e', 'scsi-SQEMU_QEMU_HARDDISK_89937d38-622a-4519-a70d-71f9b6cc380e'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.831504 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.831516 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.831537 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2aa4af3c-ac98-453f-b557-6d0c203c4201', 'scsi-SQEMU_QEMU_HARDDISK_2aa4af3c-ac98-453f-b557-6d0c203c4201'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.831556 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5cc3399e-7952-4fd6-9ff6-a2b0255266c3', 'scsi-SQEMU_QEMU_HARDDISK_5cc3399e-7952-4fd6-9ff6-a2b0255266c3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5cc3399e-7952-4fd6-9ff6-a2b0255266c3-part1', 'scsi-SQEMU_QEMU_HARDDISK_5cc3399e-7952-4fd6-9ff6-a2b0255266c3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5cc3399e-7952-4fd6-9ff6-a2b0255266c3-part14', 'scsi-SQEMU_QEMU_HARDDISK_5cc3399e-7952-4fd6-9ff6-a2b0255266c3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5cc3399e-7952-4fd6-9ff6-a2b0255266c3-part15', 'scsi-SQEMU_QEMU_HARDDISK_5cc3399e-7952-4fd6-9ff6-a2b0255266c3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5cc3399e-7952-4fd6-9ff6-a2b0255266c3-part16', 'scsi-SQEMU_QEMU_HARDDISK_5cc3399e-7952-4fd6-9ff6-a2b0255266c3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.831570 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-03-00-02-06-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:53:05.831582 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-03-00-02-08-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2025-09-03 00:53:05.831597 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.831607 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.831622 | orchestrator | 2025-09-03 00:53:05.831633 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-09-03 00:53:05.831644 | orchestrator | Wednesday 03 September 2025 00:42:54 +0000 (0:00:01.047) 0:00:31.197 *** 2025-09-03 00:53:05.831659 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.831670 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.831680 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.831690 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:53:05.831700 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:53:05.831710 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:53:05.831720 | orchestrator | 2025-09-03 00:53:05.831730 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-09-03 00:53:05.831739 | orchestrator | Wednesday 03 September 2025 00:42:55 +0000 (0:00:01.303) 0:00:32.501 *** 2025-09-03 00:53:05.831749 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.831759 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.831769 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.831779 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:53:05.831789 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:53:05.831798 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:53:05.831808 | orchestrator | 2025-09-03 00:53:05.831818 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-09-03 00:53:05.831828 | orchestrator | Wednesday 03 September 2025 00:42:56 +0000 (0:00:00.622) 0:00:33.124 *** 2025-09-03 00:53:05.831838 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.831847 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.831857 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.831867 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.831877 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.831887 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.831896 | orchestrator | 2025-09-03 00:53:05.831906 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-09-03 00:53:05.831916 | orchestrator | Wednesday 03 September 2025 00:42:57 +0000 (0:00:00.915) 0:00:34.039 *** 2025-09-03 00:53:05.831926 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.831936 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.831945 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.831955 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.832006 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.832017 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.832027 | orchestrator | 2025-09-03 00:53:05.832036 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-09-03 00:53:05.832046 | orchestrator | Wednesday 03 September 2025 00:42:58 +0000 (0:00:00.604) 0:00:34.644 *** 2025-09-03 00:53:05.832056 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.832066 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.832076 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.832086 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.832096 | 
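The long run of "skipping" items above is ceph-facts walking over every block device reported in each host's facts; the quoted false_condition values show the two gates: the host must be in the OSD group (inventory_hostname in groups.get(osd_group_name, [])) and osd_auto_discovery must be enabled. A minimal sketch of that per-device loop, illustrative only and not the actual ceph-ansible task (the fact name auto_discovered_devices is assumed):

  - name: Collect candidate OSD devices from host facts (sketch)
    ansible.builtin.set_fact:
      auto_discovered_devices: "{{ auto_discovered_devices | default([]) + ['/dev/' + item.key] }}"
    loop: "{{ ansible_facts['devices'] | dict2items }}"
    loop_control:
      label: "{{ item.key }}"
    when:
      - inventory_hostname in groups.get(osd_group_name, [])
      - osd_auto_discovery | default(False) | bool
      - item.value.partitions | length == 0    # root disks like sda carry partitions and are skipped
      - item.value.holders | length == 0       # sdb/sdc are already claimed by ceph LVM volumes

Under such a filter only the empty data disks (sdd in the dump above) would remain; the loop devices with a size of 0.00 Bytes would also need to be excluded in practice.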
orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.832105 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.832115 | orchestrator | 2025-09-03 00:53:05.832125 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-09-03 00:53:05.832135 | orchestrator | Wednesday 03 September 2025 00:42:59 +0000 (0:00:01.019) 0:00:35.663 *** 2025-09-03 00:53:05.832145 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.832155 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.832165 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.832174 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.832184 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.832194 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.832203 | orchestrator | 2025-09-03 00:53:05.832213 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-09-03 00:53:05.832223 | orchestrator | Wednesday 03 September 2025 00:43:00 +0000 (0:00:01.496) 0:00:37.160 *** 2025-09-03 00:53:05.832242 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-09-03 00:53:05.832252 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-09-03 00:53:05.832262 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-09-03 00:53:05.832272 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-09-03 00:53:05.832281 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-09-03 00:53:05.832291 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2025-09-03 00:53:05.832301 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-09-03 00:53:05.832310 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-03 00:53:05.832320 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-09-03 00:53:05.832329 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2025-09-03 00:53:05.832339 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-09-03 00:53:05.832348 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-09-03 00:53:05.832358 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-09-03 00:53:05.832367 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2025-09-03 00:53:05.832376 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2025-09-03 00:53:05.832384 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-09-03 00:53:05.832392 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2025-09-03 00:53:05.832400 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2025-09-03 00:53:05.832408 | orchestrator | 2025-09-03 00:53:05.832416 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-09-03 00:53:05.832424 | orchestrator | Wednesday 03 September 2025 00:43:04 +0000 (0:00:03.465) 0:00:40.626 *** 2025-09-03 00:53:05.832432 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-09-03 00:53:05.832440 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-09-03 00:53:05.832447 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-09-03 00:53:05.832455 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.832463 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-09-03 00:53:05.832471 | orchestrator | skipping: [testbed-node-4] => 
(item=testbed-node-1)  2025-09-03 00:53:05.832483 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-09-03 00:53:05.832491 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.832499 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-09-03 00:53:05.832507 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-09-03 00:53:05.832520 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-09-03 00:53:05.832528 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-03 00:53:05.832536 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-03 00:53:05.832544 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.832552 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-03 00:53:05.832561 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-09-03 00:53:05.832568 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-09-03 00:53:05.832576 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-09-03 00:53:05.832584 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.832592 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.832601 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-09-03 00:53:05.832609 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-09-03 00:53:05.832617 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-09-03 00:53:05.832625 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.832633 | orchestrator | 2025-09-03 00:53:05.832641 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-09-03 00:53:05.832649 | orchestrator | Wednesday 03 September 2025 00:43:05 +0000 (0:00:00.942) 0:00:41.569 *** 2025-09-03 00:53:05.832661 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.832670 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.832678 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.832686 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-03 00:53:05.832694 | orchestrator | 2025-09-03 00:53:05.832703 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-09-03 00:53:05.832711 | orchestrator | Wednesday 03 September 2025 00:43:06 +0000 (0:00:01.801) 0:00:43.371 *** 2025-09-03 00:53:05.832720 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.832728 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.832736 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.832744 | orchestrator | 2025-09-03 00:53:05.832752 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-09-03 00:53:05.832760 | orchestrator | Wednesday 03 September 2025 00:43:07 +0000 (0:00:00.580) 0:00:43.951 *** 2025-09-03 00:53:05.832768 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.832776 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.832784 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.832792 | orchestrator | 2025-09-03 00:53:05.832801 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-09-03 00:53:05.832809 | 
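The two _monitor_addresses tasks above build, on every host, the list of monitor names and reachable addresses by looping over the three mon nodes (testbed-node-0..2); the ipv6 variant is skipped on all hosts. A rough sketch of the ipv4 case, with ip_version and mon_group_name as assumed variable names and the address taken from the default interface for brevity:

  - name: Set_fact _monitor_addresses - ipv4 (sketch)
    ansible.builtin.set_fact:
      _monitor_addresses: "{{ _monitor_addresses | default([]) + [{'name': item, 'addr': hostvars[item]['ansible_default_ipv4']['address']}] }}"
    loop: "{{ groups.get(mon_group_name, []) }}"
    when: ip_version == 'ipv4'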
orchestrator | Wednesday 03 September 2025 00:43:07 +0000 (0:00:00.492) 0:00:44.443 *** 2025-09-03 00:53:05.832817 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.832825 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.832833 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.832841 | orchestrator | 2025-09-03 00:53:05.832849 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-09-03 00:53:05.832858 | orchestrator | Wednesday 03 September 2025 00:43:08 +0000 (0:00:00.430) 0:00:44.874 *** 2025-09-03 00:53:05.832866 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.832874 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.832882 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.832890 | orchestrator | 2025-09-03 00:53:05.832898 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-09-03 00:53:05.832906 | orchestrator | Wednesday 03 September 2025 00:43:08 +0000 (0:00:00.480) 0:00:45.354 *** 2025-09-03 00:53:05.832914 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-03 00:53:05.832923 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-03 00:53:05.832931 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-03 00:53:05.832939 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.832947 | orchestrator | 2025-09-03 00:53:05.832955 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-09-03 00:53:05.832977 | orchestrator | Wednesday 03 September 2025 00:43:09 +0000 (0:00:00.527) 0:00:45.882 *** 2025-09-03 00:53:05.832985 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-03 00:53:05.832993 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-03 00:53:05.833001 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-03 00:53:05.833009 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.833017 | orchestrator | 2025-09-03 00:53:05.833025 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-09-03 00:53:05.833033 | orchestrator | Wednesday 03 September 2025 00:43:09 +0000 (0:00:00.493) 0:00:46.375 *** 2025-09-03 00:53:05.833041 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-03 00:53:05.833049 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-03 00:53:05.833057 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-03 00:53:05.833065 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.833073 | orchestrator | 2025-09-03 00:53:05.833081 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-09-03 00:53:05.833096 | orchestrator | Wednesday 03 September 2025 00:43:10 +0000 (0:00:00.628) 0:00:47.004 *** 2025-09-03 00:53:05.833104 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.833112 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.833120 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.833128 | orchestrator | 2025-09-03 00:53:05.833136 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-09-03 00:53:05.833148 | orchestrator | Wednesday 03 September 2025 00:43:10 +0000 (0:00:00.461) 0:00:47.465 *** 2025-09-03 00:53:05.833156 | orchestrator | ok: 
[testbed-node-3] => (item=0) 2025-09-03 00:53:05.833164 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-09-03 00:53:05.833173 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-09-03 00:53:05.833181 | orchestrator | 2025-09-03 00:53:05.833193 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-09-03 00:53:05.833202 | orchestrator | Wednesday 03 September 2025 00:43:12 +0000 (0:00:01.115) 0:00:48.580 *** 2025-09-03 00:53:05.833210 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-03 00:53:05.833218 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-03 00:53:05.833226 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-03 00:53:05.833234 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-09-03 00:53:05.833242 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-09-03 00:53:05.833250 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-09-03 00:53:05.833259 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-09-03 00:53:05.833267 | orchestrator | 2025-09-03 00:53:05.833275 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-09-03 00:53:05.833283 | orchestrator | Wednesday 03 September 2025 00:43:12 +0000 (0:00:00.663) 0:00:49.244 *** 2025-09-03 00:53:05.833291 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-03 00:53:05.833299 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-03 00:53:05.833307 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-03 00:53:05.833315 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-09-03 00:53:05.833323 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-09-03 00:53:05.833331 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-09-03 00:53:05.833339 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-09-03 00:53:05.833347 | orchestrator | 2025-09-03 00:53:05.833354 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-03 00:53:05.833362 | orchestrator | Wednesday 03 September 2025 00:43:14 +0000 (0:00:01.654) 0:00:50.898 *** 2025-09-03 00:53:05.833371 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 00:53:05.833380 | orchestrator | 2025-09-03 00:53:05.833388 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-03 00:53:05.833396 | orchestrator | Wednesday 03 September 2025 00:43:15 +0000 (0:00:01.141) 0:00:52.039 *** 2025-09-03 00:53:05.833405 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 00:53:05.833413 | orchestrator | 2025-09-03 00:53:05.833421 | orchestrator | TASK [ceph-handler : Check for a mon container] 
******************************** 2025-09-03 00:53:05.833429 | orchestrator | Wednesday 03 September 2025 00:43:16 +0000 (0:00:01.195) 0:00:53.234 *** 2025-09-03 00:53:05.833443 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.833451 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.833460 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.833468 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:53:05.833476 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:53:05.833484 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:53:05.833492 | orchestrator | 2025-09-03 00:53:05.833500 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-03 00:53:05.833508 | orchestrator | Wednesday 03 September 2025 00:43:18 +0000 (0:00:01.957) 0:00:55.192 *** 2025-09-03 00:53:05.833516 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.833524 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.833532 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.833540 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.833549 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.833557 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.833565 | orchestrator | 2025-09-03 00:53:05.833573 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-03 00:53:05.833581 | orchestrator | Wednesday 03 September 2025 00:43:19 +0000 (0:00:01.014) 0:00:56.207 *** 2025-09-03 00:53:05.833589 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.833597 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.833605 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.833613 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.833621 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.833629 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.833637 | orchestrator | 2025-09-03 00:53:05.833645 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-03 00:53:05.833654 | orchestrator | Wednesday 03 September 2025 00:43:20 +0000 (0:00:00.861) 0:00:57.069 *** 2025-09-03 00:53:05.833662 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.833670 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.833678 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.833686 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.833694 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.833702 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.833710 | orchestrator | 2025-09-03 00:53:05.833718 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-03 00:53:05.833726 | orchestrator | Wednesday 03 September 2025 00:43:21 +0000 (0:00:00.642) 0:00:57.711 *** 2025-09-03 00:53:05.833734 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.833742 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.833754 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.833762 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:53:05.833770 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:53:05.833778 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:53:05.833786 | orchestrator | 2025-09-03 00:53:05.833794 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-03 00:53:05.833806 | orchestrator | 
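check_running_containers.yml probes each node for the daemons it is expected to host, which is why the mon and mgr checks return ok only on testbed-node-0..2 while the osd, mds and rgw checks return ok only on testbed-node-3..5. One plausible shape for such a probe, with container_binary and the container name pattern as assumptions rather than values taken from the playbook:

  - name: Check for a mon container (sketch)
    ansible.builtin.command: "{{ container_binary | default('docker') }} ps -q --filter name=ceph-mon-{{ ansible_facts['hostname'] }}"
    register: ceph_mon_container_stat
    changed_when: false
    failed_when: false
    check_mode: false
    when: inventory_hostname in groups.get(mon_group_name, [])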
Wednesday 03 September 2025 00:43:22 +0000 (0:00:00.981) 0:00:58.693 *** 2025-09-03 00:53:05.833814 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.833823 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.833831 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.833839 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.833847 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.833855 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.833863 | orchestrator | 2025-09-03 00:53:05.833871 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-03 00:53:05.833879 | orchestrator | Wednesday 03 September 2025 00:43:22 +0000 (0:00:00.426) 0:00:59.120 *** 2025-09-03 00:53:05.833887 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.833895 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.833903 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.833911 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.833925 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.833933 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.833941 | orchestrator | 2025-09-03 00:53:05.833949 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-03 00:53:05.833957 | orchestrator | Wednesday 03 September 2025 00:43:23 +0000 (0:00:00.581) 0:00:59.701 *** 2025-09-03 00:53:05.833978 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.833986 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.833994 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.834002 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:53:05.834010 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:53:05.834053 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:53:05.834062 | orchestrator | 2025-09-03 00:53:05.834070 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-03 00:53:05.834078 | orchestrator | Wednesday 03 September 2025 00:43:24 +0000 (0:00:01.009) 0:01:00.711 *** 2025-09-03 00:53:05.834087 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.834095 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.834103 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.834111 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:53:05.834119 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:53:05.834127 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:53:05.834135 | orchestrator | 2025-09-03 00:53:05.834143 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-03 00:53:05.834151 | orchestrator | Wednesday 03 September 2025 00:43:25 +0000 (0:00:01.099) 0:01:01.810 *** 2025-09-03 00:53:05.834159 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.834167 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.834175 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.834183 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.834191 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.834200 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.834208 | orchestrator | 2025-09-03 00:53:05.834216 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-03 00:53:05.834224 | orchestrator | Wednesday 03 September 2025 00:43:26 +0000 (0:00:01.029) 
0:01:02.839 *** 2025-09-03 00:53:05.834232 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.834240 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.834248 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.834256 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:53:05.834264 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:53:05.834272 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:53:05.834280 | orchestrator | 2025-09-03 00:53:05.834288 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-03 00:53:05.834296 | orchestrator | Wednesday 03 September 2025 00:43:27 +0000 (0:00:00.821) 0:01:03.661 *** 2025-09-03 00:53:05.834304 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.834312 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.834320 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.834328 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.834336 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.834345 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.834353 | orchestrator | 2025-09-03 00:53:05.834361 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-03 00:53:05.834369 | orchestrator | Wednesday 03 September 2025 00:43:27 +0000 (0:00:00.661) 0:01:04.322 *** 2025-09-03 00:53:05.834377 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.834385 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.834393 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.834401 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.834409 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.834417 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.834425 | orchestrator | 2025-09-03 00:53:05.834433 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-03 00:53:05.834441 | orchestrator | Wednesday 03 September 2025 00:43:28 +0000 (0:00:00.587) 0:01:04.910 *** 2025-09-03 00:53:05.834455 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.834463 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.834471 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.834479 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.834488 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.834496 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.834504 | orchestrator | 2025-09-03 00:53:05.834512 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-03 00:53:05.834520 | orchestrator | Wednesday 03 September 2025 00:43:29 +0000 (0:00:00.743) 0:01:05.653 *** 2025-09-03 00:53:05.834528 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.834536 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.834544 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.834552 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.834560 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.834568 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.834576 | orchestrator | 2025-09-03 00:53:05.834584 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-03 00:53:05.834591 | orchestrator | Wednesday 03 September 2025 00:43:29 +0000 (0:00:00.568) 0:01:06.221 *** 2025-09-03 00:53:05.834603 | orchestrator | 
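The handler_*_status facts that follow simply turn those probe results into booleans that later handler tasks can consult before restarting a daemon. Continuing the sketch above (ceph_mon_container_stat is the register name assumed there, not one read from this log):

  - name: Set_fact handler_mon_status (sketch)
    ansible.builtin.set_fact:
      handler_mon_status: "{{ ceph_mon_container_stat.stdout_lines | default([]) | length > 0 }}"
    when: inventory_hostname in groups.get(mon_group_name, [])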
skipping: [testbed-node-3] 2025-09-03 00:53:05.834611 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.834619 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.834627 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.834636 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.834644 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.834652 | orchestrator | 2025-09-03 00:53:05.834677 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-03 00:53:05.834686 | orchestrator | Wednesday 03 September 2025 00:43:30 +0000 (0:00:00.819) 0:01:07.041 *** 2025-09-03 00:53:05.834693 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.834702 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.834710 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.834718 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:53:05.834726 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:53:05.834734 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:53:05.834765 | orchestrator | 2025-09-03 00:53:05.834773 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-03 00:53:05.834781 | orchestrator | Wednesday 03 September 2025 00:43:31 +0000 (0:00:00.774) 0:01:07.815 *** 2025-09-03 00:53:05.834790 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.834798 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.834806 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.834813 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:53:05.834821 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:53:05.834829 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:53:05.834837 | orchestrator | 2025-09-03 00:53:05.834845 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-03 00:53:05.834853 | orchestrator | Wednesday 03 September 2025 00:43:32 +0000 (0:00:00.766) 0:01:08.582 *** 2025-09-03 00:53:05.834861 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.834869 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.834877 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.834885 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:53:05.834893 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:53:05.834901 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:53:05.834909 | orchestrator | 2025-09-03 00:53:05.834917 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2025-09-03 00:53:05.834925 | orchestrator | Wednesday 03 September 2025 00:43:33 +0000 (0:00:01.028) 0:01:09.611 *** 2025-09-03 00:53:05.834933 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:53:05.834941 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:53:05.834949 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:53:05.834957 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:53:05.835006 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:53:05.835016 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:53:05.835024 | orchestrator | 2025-09-03 00:53:05.835032 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2025-09-03 00:53:05.835040 | orchestrator | Wednesday 03 September 2025 00:43:34 +0000 (0:00:01.407) 0:01:11.019 *** 2025-09-03 00:53:05.835048 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:53:05.835056 | orchestrator | changed: 
[testbed-node-5] 2025-09-03 00:53:05.835064 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:53:05.835072 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:53:05.835080 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:53:05.835088 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:53:05.835096 | orchestrator | 2025-09-03 00:53:05.835104 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2025-09-03 00:53:05.835112 | orchestrator | Wednesday 03 September 2025 00:43:36 +0000 (0:00:01.929) 0:01:12.948 *** 2025-09-03 00:53:05.835120 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 00:53:05.835128 | orchestrator | 2025-09-03 00:53:05.835136 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2025-09-03 00:53:05.835144 | orchestrator | Wednesday 03 September 2025 00:43:37 +0000 (0:00:01.003) 0:01:13.952 *** 2025-09-03 00:53:05.835152 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.835160 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.835168 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.835176 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.835184 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.835192 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.835200 | orchestrator | 2025-09-03 00:53:05.835208 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2025-09-03 00:53:05.835216 | orchestrator | Wednesday 03 September 2025 00:43:38 +0000 (0:00:00.836) 0:01:14.789 *** 2025-09-03 00:53:05.835224 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.835231 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.835239 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.835247 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.835255 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.835263 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.835271 | orchestrator | 2025-09-03 00:53:05.835279 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2025-09-03 00:53:05.835287 | orchestrator | Wednesday 03 September 2025 00:43:39 +0000 (0:00:00.754) 0:01:15.543 *** 2025-09-03 00:53:05.835295 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-03 00:53:05.835303 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-03 00:53:05.835311 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-03 00:53:05.835319 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-03 00:53:05.835327 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-03 00:53:05.835335 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-03 00:53:05.835343 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-03 00:53:05.835350 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-03 00:53:05.835363 | orchestrator | ok: [testbed-node-5] => 
(item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-03 00:53:05.835371 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-03 00:53:05.835380 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-03 00:53:05.835407 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-03 00:53:05.835414 | orchestrator | 2025-09-03 00:53:05.835421 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2025-09-03 00:53:05.835428 | orchestrator | Wednesday 03 September 2025 00:43:40 +0000 (0:00:01.260) 0:01:16.804 *** 2025-09-03 00:53:05.835435 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:53:05.835441 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:53:05.835448 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:53:05.835455 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:53:05.835462 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:53:05.835468 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:53:05.835475 | orchestrator | 2025-09-03 00:53:05.835482 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2025-09-03 00:53:05.835488 | orchestrator | Wednesday 03 September 2025 00:43:41 +0000 (0:00:01.096) 0:01:17.900 *** 2025-09-03 00:53:05.835495 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.835502 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.835509 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.835515 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.835522 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.835529 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.835535 | orchestrator | 2025-09-03 00:53:05.835542 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2025-09-03 00:53:05.835549 | orchestrator | Wednesday 03 September 2025 00:43:41 +0000 (0:00:00.562) 0:01:18.463 *** 2025-09-03 00:53:05.835556 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.835562 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.835569 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.835576 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.835582 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.835589 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.835595 | orchestrator | 2025-09-03 00:53:05.835602 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2025-09-03 00:53:05.835609 | orchestrator | Wednesday 03 September 2025 00:43:42 +0000 (0:00:00.729) 0:01:19.192 *** 2025-09-03 00:53:05.835616 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.835622 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.835629 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.835636 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.835642 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.835649 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.835656 | orchestrator | 2025-09-03 00:53:05.835662 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2025-09-03 00:53:05.835669 | orchestrator | Wednesday 03 September 2025 00:43:43 +0000 (0:00:00.577) 
0:01:19.770 *** 2025-09-03 00:53:05.835676 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 00:53:05.835683 | orchestrator | 2025-09-03 00:53:05.835690 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2025-09-03 00:53:05.835697 | orchestrator | Wednesday 03 September 2025 00:43:44 +0000 (0:00:01.089) 0:01:20.859 *** 2025-09-03 00:53:05.835704 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:53:05.835710 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.835717 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:53:05.835724 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.835731 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.835737 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:53:05.835744 | orchestrator | 2025-09-03 00:53:05.835751 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2025-09-03 00:53:05.835758 | orchestrator | Wednesday 03 September 2025 00:44:40 +0000 (0:00:55.762) 0:02:16.622 *** 2025-09-03 00:53:05.835764 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-03 00:53:05.835775 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-03 00:53:05.835782 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-03 00:53:05.835789 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.835796 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-03 00:53:05.835802 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-03 00:53:05.835809 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-03 00:53:05.835816 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.835822 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-03 00:53:05.835829 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-03 00:53:05.835836 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-03 00:53:05.835843 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.835850 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-03 00:53:05.835856 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-03 00:53:05.835863 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-03 00:53:05.835870 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.835876 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-03 00:53:05.835883 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-03 00:53:05.835893 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-03 00:53:05.835900 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.835907 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-03 00:53:05.835928 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-03 00:53:05.835935 | 
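Pulling the Ceph container image is the longest step in this block (roughly 56 seconds, per the elapsed time printed in the following task header), while the alertmanager, prometheus and grafana pulls are skipped on all nodes. A sketch of an image pull with retries; the module choice and the ceph_docker_* variable names are assumptions, not a transcript of the ceph-ansible task:

  - name: Pulling Ceph container image (sketch)
    community.docker.docker_image:
      name: "{{ ceph_docker_registry }}/{{ ceph_docker_image }}:{{ ceph_docker_image_tag }}"
      source: pull
    register: ceph_image_pull
    retries: 3
    delay: 10
    until: ceph_image_pull is succeeded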
orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-03 00:53:05.835942 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.835949 | orchestrator | 2025-09-03 00:53:05.835956 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2025-09-03 00:53:05.835974 | orchestrator | Wednesday 03 September 2025 00:44:40 +0000 (0:00:00.630) 0:02:17.253 *** 2025-09-03 00:53:05.835982 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.835989 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.835996 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.836002 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.836009 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.836016 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.836022 | orchestrator | 2025-09-03 00:53:05.836029 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2025-09-03 00:53:05.836036 | orchestrator | Wednesday 03 September 2025 00:44:41 +0000 (0:00:00.871) 0:02:18.124 *** 2025-09-03 00:53:05.836043 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.836049 | orchestrator | 2025-09-03 00:53:05.836056 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2025-09-03 00:53:05.836063 | orchestrator | Wednesday 03 September 2025 00:44:41 +0000 (0:00:00.160) 0:02:18.285 *** 2025-09-03 00:53:05.836069 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.836076 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.836083 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.836089 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.836096 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.836103 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.836109 | orchestrator | 2025-09-03 00:53:05.836116 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2025-09-03 00:53:05.836130 | orchestrator | Wednesday 03 September 2025 00:44:42 +0000 (0:00:00.613) 0:02:18.898 *** 2025-09-03 00:53:05.836137 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.836143 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.836150 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.836157 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.836164 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.836170 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.836177 | orchestrator | 2025-09-03 00:53:05.836184 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2025-09-03 00:53:05.836191 | orchestrator | Wednesday 03 September 2025 00:44:43 +0000 (0:00:00.895) 0:02:19.793 *** 2025-09-03 00:53:05.836198 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.836204 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.836211 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.836218 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.836225 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.836231 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.836238 | orchestrator | 2025-09-03 00:53:05.836245 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2025-09-03 
00:53:05.836252 | orchestrator | Wednesday 03 September 2025 00:44:43 +0000 (0:00:00.599) 0:02:20.393 *** 2025-09-03 00:53:05.836258 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.836265 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.836272 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.836279 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:53:05.836285 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:53:05.836292 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:53:05.836299 | orchestrator | 2025-09-03 00:53:05.836306 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2025-09-03 00:53:05.836313 | orchestrator | Wednesday 03 September 2025 00:44:46 +0000 (0:00:02.242) 0:02:22.635 *** 2025-09-03 00:53:05.836319 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.836326 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.836333 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.836339 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:53:05.836346 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:53:05.836353 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:53:05.836360 | orchestrator | 2025-09-03 00:53:05.836366 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2025-09-03 00:53:05.836373 | orchestrator | Wednesday 03 September 2025 00:44:46 +0000 (0:00:00.793) 0:02:23.428 *** 2025-09-03 00:53:05.836380 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 00:53:05.836388 | orchestrator | 2025-09-03 00:53:05.836395 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2025-09-03 00:53:05.836402 | orchestrator | Wednesday 03 September 2025 00:44:47 +0000 (0:00:01.070) 0:02:24.499 *** 2025-09-03 00:53:05.836408 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.836415 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.836422 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.836429 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.836435 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.836442 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.836449 | orchestrator | 2025-09-03 00:53:05.836456 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2025-09-03 00:53:05.836462 | orchestrator | Wednesday 03 September 2025 00:44:48 +0000 (0:00:00.657) 0:02:25.157 *** 2025-09-03 00:53:05.836469 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.836476 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.836483 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.836490 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.836496 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.836506 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.836513 | orchestrator | 2025-09-03 00:53:05.836520 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2025-09-03 00:53:05.836527 | orchestrator | Wednesday 03 September 2025 00:44:49 +0000 (0:00:00.568) 0:02:25.726 *** 2025-09-03 00:53:05.836533 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.836540 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.836547 | 
orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.836553 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.836560 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.836582 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.836590 | orchestrator | 2025-09-03 00:53:05.836596 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2025-09-03 00:53:05.836649 | orchestrator | Wednesday 03 September 2025 00:44:49 +0000 (0:00:00.739) 0:02:26.466 *** 2025-09-03 00:53:05.836663 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.836670 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.836677 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.836684 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.836690 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.836697 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.836704 | orchestrator | 2025-09-03 00:53:05.836711 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2025-09-03 00:53:05.836718 | orchestrator | Wednesday 03 September 2025 00:44:50 +0000 (0:00:00.601) 0:02:27.067 *** 2025-09-03 00:53:05.836724 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.836731 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.836738 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.836745 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.836751 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.836758 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.836765 | orchestrator | 2025-09-03 00:53:05.836772 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2025-09-03 00:53:05.836778 | orchestrator | Wednesday 03 September 2025 00:44:51 +0000 (0:00:00.532) 0:02:27.599 *** 2025-09-03 00:53:05.836785 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.836792 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.836798 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.836805 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.836812 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.836818 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.836825 | orchestrator | 2025-09-03 00:53:05.836832 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2025-09-03 00:53:05.836839 | orchestrator | Wednesday 03 September 2025 00:44:51 +0000 (0:00:00.633) 0:02:28.233 *** 2025-09-03 00:53:05.836845 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.836852 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.836859 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.836865 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.836872 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.836879 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.836885 | orchestrator | 2025-09-03 00:53:05.836892 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2025-09-03 00:53:05.836899 | orchestrator | Wednesday 03 September 2025 00:44:52 +0000 (0:00:00.475) 0:02:28.708 *** 2025-09-03 00:53:05.836906 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.836912 | orchestrator | skipping: [testbed-node-4] 2025-09-03 
00:53:05.836919 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.836926 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.836933 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.836940 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.836946 | orchestrator | 2025-09-03 00:53:05.836953 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2025-09-03 00:53:05.836977 | orchestrator | Wednesday 03 September 2025 00:44:52 +0000 (0:00:00.571) 0:02:29.280 *** 2025-09-03 00:53:05.836984 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.836991 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.836998 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.837005 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:53:05.837012 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:53:05.837018 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:53:05.837025 | orchestrator | 2025-09-03 00:53:05.837032 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2025-09-03 00:53:05.837038 | orchestrator | Wednesday 03 September 2025 00:44:53 +0000 (0:00:01.002) 0:02:30.282 *** 2025-09-03 00:53:05.837045 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 00:53:05.837052 | orchestrator | 2025-09-03 00:53:05.837059 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2025-09-03 00:53:05.837065 | orchestrator | Wednesday 03 September 2025 00:44:54 +0000 (0:00:00.868) 0:02:31.151 *** 2025-09-03 00:53:05.837072 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2025-09-03 00:53:05.837079 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2025-09-03 00:53:05.837085 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2025-09-03 00:53:05.837092 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2025-09-03 00:53:05.837099 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2025-09-03 00:53:05.837106 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2025-09-03 00:53:05.837113 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2025-09-03 00:53:05.837119 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2025-09-03 00:53:05.837126 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2025-09-03 00:53:05.837132 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2025-09-03 00:53:05.837139 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2025-09-03 00:53:05.837146 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2025-09-03 00:53:05.837153 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2025-09-03 00:53:05.837159 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2025-09-03 00:53:05.837166 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2025-09-03 00:53:05.837173 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2025-09-03 00:53:05.837179 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2025-09-03 00:53:05.837189 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2025-09-03 00:53:05.837196 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2025-09-03 
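The release.yml chain above probes each release name in turn and only the reef fact is set, i.e. the version string captured by "Get ceph version" and split by the preceding set_fact maps to Ceph 18 (Reef). A sketch of the equivalent mapping, assuming the usual `ceph --version` output format (the sample string is illustrative):

# Minimal sketch of the version -> release mapping that the chain of
# "Set_fact ceph_release ..." tasks implements.
RELEASES = {
    10: "jewel", 11: "kraken", 12: "luminous", 13: "mimic",
    14: "nautilus", 15: "octopus", 16: "pacific", 17: "quincy", 18: "reef",
}

def ceph_release(version_stdout: str) -> str:
    # e.g. "ceph version 18.2.4 (...) reef (stable)" -> major 18 -> "reef"
    version = version_stdout.split()[2]
    major = int(version.split(".")[0])
    return RELEASES[major]

assert ceph_release("ceph version 18.2.4 (abc) reef (stable)") == "reef"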
00:53:05.837203 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2025-09-03 00:53:05.837226 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2025-09-03 00:53:05.837234 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2025-09-03 00:53:05.837240 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2025-09-03 00:53:05.837247 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2025-09-03 00:53:05.837254 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2025-09-03 00:53:05.837260 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2025-09-03 00:53:05.837267 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2025-09-03 00:53:05.837274 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2025-09-03 00:53:05.837281 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2025-09-03 00:53:05.837287 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2025-09-03 00:53:05.837294 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2025-09-03 00:53:05.837306 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2025-09-03 00:53:05.837313 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2025-09-03 00:53:05.837319 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2025-09-03 00:53:05.837326 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2025-09-03 00:53:05.837333 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2025-09-03 00:53:05.837340 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2025-09-03 00:53:05.837346 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2025-09-03 00:53:05.837353 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2025-09-03 00:53:05.837360 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2025-09-03 00:53:05.837366 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2025-09-03 00:53:05.837373 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2025-09-03 00:53:05.837380 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2025-09-03 00:53:05.837386 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2025-09-03 00:53:05.837393 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2025-09-03 00:53:05.837400 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2025-09-03 00:53:05.837406 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2025-09-03 00:53:05.837413 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-03 00:53:05.837420 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-03 00:53:05.837426 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-03 00:53:05.837433 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2025-09-03 00:53:05.837440 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-03 00:53:05.837446 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-03 00:53:05.837453 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-03 00:53:05.837460 | 
orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-03 00:53:05.837466 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-03 00:53:05.837473 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-03 00:53:05.837480 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-03 00:53:05.837486 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-03 00:53:05.837493 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-03 00:53:05.837500 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-03 00:53:05.837506 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-03 00:53:05.837513 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-03 00:53:05.837520 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-03 00:53:05.837526 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-03 00:53:05.837533 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-03 00:53:05.837540 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-03 00:53:05.837546 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-03 00:53:05.837553 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-03 00:53:05.837560 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-03 00:53:05.837566 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-03 00:53:05.837573 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-03 00:53:05.837584 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-03 00:53:05.837591 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-03 00:53:05.837597 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-03 00:53:05.837607 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-03 00:53:05.837614 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-03 00:53:05.837621 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-03 00:53:05.837643 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-03 00:53:05.837650 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-03 00:53:05.837657 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-03 00:53:05.837664 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2025-09-03 00:53:05.837671 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2025-09-03 00:53:05.837677 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-03 00:53:05.837684 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2025-09-03 00:53:05.837691 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-03 00:53:05.837697 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2025-09-03 00:53:05.837704 | 
orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2025-09-03 00:53:05.837711 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2025-09-03 00:53:05.837717 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2025-09-03 00:53:05.837724 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2025-09-03 00:53:05.837731 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-03 00:53:05.837737 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2025-09-03 00:53:05.837744 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2025-09-03 00:53:05.837751 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2025-09-03 00:53:05.837757 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2025-09-03 00:53:05.837764 | orchestrator | 2025-09-03 00:53:05.837771 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2025-09-03 00:53:05.837777 | orchestrator | Wednesday 03 September 2025 00:45:01 +0000 (0:00:06.687) 0:02:37.838 *** 2025-09-03 00:53:05.837784 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.837791 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.837798 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.837805 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-03 00:53:05.837812 | orchestrator | 2025-09-03 00:53:05.837818 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2025-09-03 00:53:05.837825 | orchestrator | Wednesday 03 September 2025 00:45:02 +0000 (0:00:00.905) 0:02:38.744 *** 2025-09-03 00:53:05.837832 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-03 00:53:05.837839 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-03 00:53:05.837846 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-03 00:53:05.837853 | orchestrator | 2025-09-03 00:53:05.837860 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2025-09-03 00:53:05.837866 | orchestrator | Wednesday 03 September 2025 00:45:02 +0000 (0:00:00.765) 0:02:39.509 *** 2025-09-03 00:53:05.837873 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-03 00:53:05.837889 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-03 00:53:05.837895 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-03 00:53:05.837902 | orchestrator | 2025-09-03 00:53:05.837909 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2025-09-03 00:53:05.837916 | orchestrator | Wednesday 03 September 2025 00:45:04 +0000 (0:00:01.219) 0:02:40.729 *** 2025-09-03 00:53:05.837923 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.837929 | orchestrator | ok: [testbed-node-4] 2025-09-03 
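The directory list above is the standard ceph-ansible skeleton under /etc/ceph and /var/lib/ceph, plus one directory per RGW instance on the rgw nodes. A sketch of the same layout; the per-instance naming follows ceph-ansible's usual "<cluster>-rgw.<hostname>.<instance>" convention, which is an assumption here:

import os

CEPH_DIRS = [
    "/etc/ceph", "/var/lib/ceph", "/var/lib/ceph/mon", "/var/lib/ceph/osd",
    "/var/lib/ceph/mds", "/var/lib/ceph/tmp", "/var/lib/ceph/crash",
    "/var/lib/ceph/radosgw", "/var/lib/ceph/bootstrap-rgw",
    "/var/lib/ceph/bootstrap-mgr", "/var/lib/ceph/bootstrap-mds",
    "/var/lib/ceph/bootstrap-osd", "/var/lib/ceph/bootstrap-rbd",
    "/var/lib/ceph/bootstrap-rbd-mirror", "/var/run/ceph", "/var/log/ceph",
]

def create_ceph_dirs(cluster: str, hostname: str, rgw_instances: list[dict]) -> None:
    for path in CEPH_DIRS:
        os.makedirs(path, exist_ok=True)
    # One directory per RGW instance; the naming scheme is an assumption
    # modelled on ceph-ansible's "<cluster>-rgw.<host>.<instance>" layout.
    for inst in rgw_instances:
        os.makedirs(
            f"/var/lib/ceph/radosgw/{cluster}-rgw.{hostname}.{inst['instance_name']}",
            exist_ok=True,
        )

create_ceph_dirs("ceph", "testbed-node-3",
                 [{"instance_name": "rgw0", "radosgw_address": "192.168.16.13",
                   "radosgw_frontend_port": 8081}])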
00:53:05.837936 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.837943 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.837950 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.837956 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.837994 | orchestrator | 2025-09-03 00:53:05.838002 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2025-09-03 00:53:05.838009 | orchestrator | Wednesday 03 September 2025 00:45:04 +0000 (0:00:00.532) 0:02:41.261 *** 2025-09-03 00:53:05.838036 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.838044 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.838052 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.838058 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.838065 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.838072 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.838079 | orchestrator | 2025-09-03 00:53:05.838085 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2025-09-03 00:53:05.838092 | orchestrator | Wednesday 03 September 2025 00:45:05 +0000 (0:00:00.833) 0:02:42.095 *** 2025-09-03 00:53:05.838099 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.838105 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.838112 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.838118 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.838129 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.838136 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.838143 | orchestrator | 2025-09-03 00:53:05.838149 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2025-09-03 00:53:05.838156 | orchestrator | Wednesday 03 September 2025 00:45:06 +0000 (0:00:00.581) 0:02:42.677 *** 2025-09-03 00:53:05.838179 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.838186 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.838193 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.838200 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.838207 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.838214 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.838221 | orchestrator | 2025-09-03 00:53:05.838228 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2025-09-03 00:53:05.838235 | orchestrator | Wednesday 03 September 2025 00:45:06 +0000 (0:00:00.681) 0:02:43.358 *** 2025-09-03 00:53:05.838242 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.838249 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.838255 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.838262 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.838269 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.838276 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.838283 | orchestrator | 2025-09-03 00:53:05.838290 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-09-03 00:53:05.838296 | orchestrator | Wednesday 03 September 2025 00:45:07 +0000 (0:00:00.764) 0:02:44.122 *** 2025-09-03 00:53:05.838303 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.838309 | orchestrator | skipping: [testbed-node-4] 
2025-09-03 00:53:05.838320 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.838326 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.838333 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.838339 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.838345 | orchestrator | 2025-09-03 00:53:05.838352 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-09-03 00:53:05.838358 | orchestrator | Wednesday 03 September 2025 00:45:08 +0000 (0:00:00.744) 0:02:44.866 *** 2025-09-03 00:53:05.838364 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.838370 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.838377 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.838383 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.838389 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.838396 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.838402 | orchestrator | 2025-09-03 00:53:05.838408 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-09-03 00:53:05.838415 | orchestrator | Wednesday 03 September 2025 00:45:09 +0000 (0:00:00.659) 0:02:45.526 *** 2025-09-03 00:53:05.838421 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.838427 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.838433 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.838439 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.838446 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.838452 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.838458 | orchestrator | 2025-09-03 00:53:05.838465 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-09-03 00:53:05.838471 | orchestrator | Wednesday 03 September 2025 00:45:09 +0000 (0:00:00.454) 0:02:45.980 *** 2025-09-03 00:53:05.838477 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.838484 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.838490 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.838496 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.838503 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.838509 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.838515 | orchestrator | 2025-09-03 00:53:05.838522 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2025-09-03 00:53:05.838528 | orchestrator | Wednesday 03 September 2025 00:45:12 +0000 (0:00:03.398) 0:02:49.378 *** 2025-09-03 00:53:05.838534 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.838540 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.838547 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.838553 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.838560 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.838566 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.838572 | orchestrator | 2025-09-03 00:53:05.838579 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2025-09-03 00:53:05.838585 | orchestrator | Wednesday 03 September 2025 00:45:13 +0000 (0:00:00.616) 0:02:49.995 *** 2025-09-03 00:53:05.838591 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.838598 | 
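The "ceph-volume lvm list" step above feeds the num_osds fact: OSDs already present on each OSD node are counted and added before _osd_memory_target is derived. A sketch of the counting, assuming the JSON report form in which the output is keyed by OSD id:

import json
import subprocess

def count_existing_osds() -> int:
    # Count OSDs already created on this host via ceph-volume's JSON report.
    out = subprocess.run(
        ["ceph-volume", "lvm", "list", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    report = json.loads(out or "{}")
    # The report is keyed by OSD id, e.g. {"0": [...], "3": [...]}.
    return len(report)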
orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.838604 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.838610 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.838617 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.838623 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.838629 | orchestrator | 2025-09-03 00:53:05.838635 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2025-09-03 00:53:05.838642 | orchestrator | Wednesday 03 September 2025 00:45:14 +0000 (0:00:00.652) 0:02:50.648 *** 2025-09-03 00:53:05.838648 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.838654 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.838661 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.838667 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.838673 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.838683 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.838690 | orchestrator | 2025-09-03 00:53:05.838696 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2025-09-03 00:53:05.838702 | orchestrator | Wednesday 03 September 2025 00:45:14 +0000 (0:00:00.763) 0:02:51.411 *** 2025-09-03 00:53:05.838709 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-03 00:53:05.838715 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-03 00:53:05.838725 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-03 00:53:05.838731 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.838738 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.838744 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.838750 | orchestrator | 2025-09-03 00:53:05.838771 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2025-09-03 00:53:05.838778 | orchestrator | Wednesday 03 September 2025 00:45:15 +0000 (0:00:00.993) 0:02:52.404 *** 2025-09-03 00:53:05.838786 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2025-09-03 00:53:05.838794 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2025-09-03 00:53:05.838802 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.838808 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2025-09-03 00:53:05.838815 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 
'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2025-09-03 00:53:05.838822 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.838828 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2025-09-03 00:53:05.838835 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])  2025-09-03 00:53:05.838841 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.838847 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.838854 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.838860 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.838866 | orchestrator | 2025-09-03 00:53:05.838872 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2025-09-03 00:53:05.838879 | orchestrator | Wednesday 03 September 2025 00:45:16 +0000 (0:00:00.619) 0:02:53.023 *** 2025-09-03 00:53:05.838885 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.838891 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.838902 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.838908 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.838914 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.838920 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.838927 | orchestrator | 2025-09-03 00:53:05.838933 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2025-09-03 00:53:05.838939 | orchestrator | Wednesday 03 September 2025 00:45:17 +0000 (0:00:00.685) 0:02:53.709 *** 2025-09-03 00:53:05.838946 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.838952 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.838958 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.838977 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.838983 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.838989 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.838996 | orchestrator | 2025-09-03 00:53:05.839002 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-09-03 00:53:05.839008 | orchestrator | Wednesday 03 September 2025 00:45:17 +0000 (0:00:00.516) 0:02:54.226 *** 2025-09-03 00:53:05.839015 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.839022 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.839032 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.839042 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.839053 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.839062 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.839072 | orchestrator | 2025-09-03 
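The skipped "Set config to cluster" items above show exactly what would be applied per RGW instance: a client.rgw section with a log_file and a beast rgw_frontends endpoint. A sketch that reproduces those values from an rgw_instances entry ("default" is copied from the section names in the log):

def rgw_client_config(hostname: str, inst: dict) -> tuple[str, dict]:
    # Build the client section name and settings seen in the log above.
    section = f"client.rgw.default.{hostname}.{inst['instance_name']}"
    settings = {
        "log_file": f"/var/log/ceph/ceph-rgw-default-{hostname}.{inst['instance_name']}.log",
        "rgw_frontends": f"beast endpoint={inst['radosgw_address']}:{inst['radosgw_frontend_port']}",
    }
    return section, settings

section, settings = rgw_client_config(
    "testbed-node-3",
    {"instance_name": "rgw0", "radosgw_address": "192.168.16.13",
     "radosgw_frontend_port": 8081},
)
# section  -> "client.rgw.default.testbed-node-3.rgw0"
# settings -> {"log_file": ".../ceph-rgw-default-testbed-node-3.rgw0.log",
#              "rgw_frontends": "beast endpoint=192.168.16.13:8081"}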
00:53:05.839083 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-09-03 00:53:05.839093 | orchestrator | Wednesday 03 September 2025 00:45:18 +0000 (0:00:00.722) 0:02:54.949 *** 2025-09-03 00:53:05.839103 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.839115 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.839121 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.839127 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.839134 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.839144 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.839150 | orchestrator | 2025-09-03 00:53:05.839156 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-09-03 00:53:05.839162 | orchestrator | Wednesday 03 September 2025 00:45:19 +0000 (0:00:00.608) 0:02:55.557 *** 2025-09-03 00:53:05.839169 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.839191 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.839198 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.839204 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.839210 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.839216 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.839223 | orchestrator | 2025-09-03 00:53:05.839229 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-09-03 00:53:05.839235 | orchestrator | Wednesday 03 September 2025 00:45:19 +0000 (0:00:00.881) 0:02:56.438 *** 2025-09-03 00:53:05.839242 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.839248 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.839254 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.839260 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.839267 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.839273 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.839279 | orchestrator | 2025-09-03 00:53:05.839285 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-09-03 00:53:05.839292 | orchestrator | Wednesday 03 September 2025 00:45:20 +0000 (0:00:01.031) 0:02:57.470 *** 2025-09-03 00:53:05.839298 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-03 00:53:05.839304 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-03 00:53:05.839311 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-03 00:53:05.839334 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.839340 | orchestrator | 2025-09-03 00:53:05.839346 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-09-03 00:53:05.839353 | orchestrator | Wednesday 03 September 2025 00:45:21 +0000 (0:00:00.624) 0:02:58.094 *** 2025-09-03 00:53:05.839359 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-03 00:53:05.839365 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-03 00:53:05.839371 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-03 00:53:05.839378 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.839384 | orchestrator | 2025-09-03 00:53:05.839390 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-09-03 00:53:05.839397 
| orchestrator | Wednesday 03 September 2025 00:45:22 +0000 (0:00:00.462) 0:02:58.556 *** 2025-09-03 00:53:05.839403 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-03 00:53:05.839409 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-03 00:53:05.839415 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-03 00:53:05.839422 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.839428 | orchestrator | 2025-09-03 00:53:05.839434 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-09-03 00:53:05.839441 | orchestrator | Wednesday 03 September 2025 00:45:22 +0000 (0:00:00.690) 0:02:59.246 *** 2025-09-03 00:53:05.839447 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.839453 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.839460 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.839466 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.839472 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.839479 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.839485 | orchestrator | 2025-09-03 00:53:05.839491 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-09-03 00:53:05.839497 | orchestrator | Wednesday 03 September 2025 00:45:23 +0000 (0:00:00.789) 0:03:00.036 *** 2025-09-03 00:53:05.839504 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-09-03 00:53:05.839510 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-09-03 00:53:05.839516 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-09-03 00:53:05.839523 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.839529 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-09-03 00:53:05.839535 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.839542 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-09-03 00:53:05.839548 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.839554 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-09-03 00:53:05.839560 | orchestrator | 2025-09-03 00:53:05.839567 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2025-09-03 00:53:05.839573 | orchestrator | Wednesday 03 September 2025 00:45:25 +0000 (0:00:01.910) 0:03:01.946 *** 2025-09-03 00:53:05.839579 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:53:05.839585 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:53:05.839592 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:53:05.839598 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:53:05.839604 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:53:05.839610 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:53:05.839617 | orchestrator | 2025-09-03 00:53:05.839623 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-03 00:53:05.839629 | orchestrator | Wednesday 03 September 2025 00:45:28 +0000 (0:00:02.909) 0:03:04.856 *** 2025-09-03 00:53:05.839635 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:53:05.839642 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:53:05.839648 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:53:05.839654 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:53:05.839660 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:53:05.839667 | orchestrator | changed: [testbed-node-2] 2025-09-03 
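"Generate Ceph file" renders /etc/ceph/ceph.conf on every node from a template. A minimal sketch of writing such a file; the fsid, monitor addresses and network below are placeholders, not what the testbed template actually renders:

import configparser

def write_ceph_conf(path: str, fsid: str, mon_hosts: list[str]) -> None:
    # Write a minimal ceph.conf; ceph-ansible renders a much fuller template.
    conf = configparser.ConfigParser()
    conf["global"] = {
        "fsid": fsid,                         # placeholder cluster id
        "mon host": ",".join(mon_hosts),      # placeholder monitor addresses
        "public network": "192.168.16.0/20",  # assumed from the node addresses above
    }
    with open(path, "w") as handle:
        conf.write(handle)

write_ceph_conf("/etc/ceph/ceph.conf", "00000000-0000-0000-0000-000000000000",
                ["192.168.16.10", "192.168.16.11", "192.168.16.12"])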
00:53:05.839680 | orchestrator | 2025-09-03 00:53:05.839686 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-09-03 00:53:05.839693 | orchestrator | Wednesday 03 September 2025 00:45:29 +0000 (0:00:01.605) 0:03:06.461 *** 2025-09-03 00:53:05.839699 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.839705 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.839711 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.839718 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-2, testbed-node-1 2025-09-03 00:53:05.839724 | orchestrator | 2025-09-03 00:53:05.839734 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-09-03 00:53:05.839740 | orchestrator | Wednesday 03 September 2025 00:45:30 +0000 (0:00:00.861) 0:03:07.323 *** 2025-09-03 00:53:05.839746 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:53:05.839753 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:53:05.839759 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:53:05.839765 | orchestrator | 2025-09-03 00:53:05.839786 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-09-03 00:53:05.839793 | orchestrator | Wednesday 03 September 2025 00:45:31 +0000 (0:00:00.261) 0:03:07.585 *** 2025-09-03 00:53:05.839799 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:53:05.839805 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:53:05.839811 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:53:05.839818 | orchestrator | 2025-09-03 00:53:05.839824 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-09-03 00:53:05.839830 | orchestrator | Wednesday 03 September 2025 00:45:32 +0000 (0:00:01.254) 0:03:08.839 *** 2025-09-03 00:53:05.839836 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-03 00:53:05.839843 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-03 00:53:05.839849 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-03 00:53:05.839855 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.839861 | orchestrator | 2025-09-03 00:53:05.839867 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-09-03 00:53:05.839874 | orchestrator | Wednesday 03 September 2025 00:45:33 +0000 (0:00:00.755) 0:03:09.594 *** 2025-09-03 00:53:05.839880 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:53:05.839886 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:53:05.839892 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:53:05.839899 | orchestrator | 2025-09-03 00:53:05.839905 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-09-03 00:53:05.839911 | orchestrator | Wednesday 03 September 2025 00:45:33 +0000 (0:00:00.305) 0:03:09.900 *** 2025-09-03 00:53:05.839917 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.839923 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.839930 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.839936 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-03 00:53:05.839942 | orchestrator | 2025-09-03 00:53:05.839949 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] 
********************** 2025-09-03 00:53:05.839955 | orchestrator | Wednesday 03 September 2025 00:45:34 +0000 (0:00:01.055) 0:03:10.955 *** 2025-09-03 00:53:05.839991 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-03 00:53:05.839998 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-03 00:53:05.840005 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-03 00:53:05.840011 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.840017 | orchestrator | 2025-09-03 00:53:05.840024 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-09-03 00:53:05.840030 | orchestrator | Wednesday 03 September 2025 00:45:34 +0000 (0:00:00.338) 0:03:11.293 *** 2025-09-03 00:53:05.840036 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.840043 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.840054 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.840060 | orchestrator | 2025-09-03 00:53:05.840066 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-09-03 00:53:05.840073 | orchestrator | Wednesday 03 September 2025 00:45:35 +0000 (0:00:00.429) 0:03:11.723 *** 2025-09-03 00:53:05.840079 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.840085 | orchestrator | 2025-09-03 00:53:05.840092 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-09-03 00:53:05.840098 | orchestrator | Wednesday 03 September 2025 00:45:35 +0000 (0:00:00.188) 0:03:11.912 *** 2025-09-03 00:53:05.840104 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.840111 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.840117 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.840124 | orchestrator | 2025-09-03 00:53:05.840130 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-09-03 00:53:05.840136 | orchestrator | Wednesday 03 September 2025 00:45:35 +0000 (0:00:00.340) 0:03:12.253 *** 2025-09-03 00:53:05.840142 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.840149 | orchestrator | 2025-09-03 00:53:05.840155 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2025-09-03 00:53:05.840161 | orchestrator | Wednesday 03 September 2025 00:45:35 +0000 (0:00:00.196) 0:03:12.449 *** 2025-09-03 00:53:05.840167 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.840174 | orchestrator | 2025-09-03 00:53:05.840180 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-09-03 00:53:05.840186 | orchestrator | Wednesday 03 September 2025 00:45:36 +0000 (0:00:00.216) 0:03:12.666 *** 2025-09-03 00:53:05.840193 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.840199 | orchestrator | 2025-09-03 00:53:05.840205 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-09-03 00:53:05.840211 | orchestrator | Wednesday 03 September 2025 00:45:36 +0000 (0:00:00.122) 0:03:12.788 *** 2025-09-03 00:53:05.840218 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.840224 | orchestrator | 2025-09-03 00:53:05.840230 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-09-03 00:53:05.840236 | orchestrator | Wednesday 03 September 2025 00:45:36 +0000 (0:00:00.179) 
0:03:12.967 *** 2025-09-03 00:53:05.840243 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.840249 | orchestrator | 2025-09-03 00:53:05.840255 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-09-03 00:53:05.840261 | orchestrator | Wednesday 03 September 2025 00:45:36 +0000 (0:00:00.183) 0:03:13.151 *** 2025-09-03 00:53:05.840267 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-03 00:53:05.840272 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-03 00:53:05.840278 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-03 00:53:05.840283 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.840289 | orchestrator | 2025-09-03 00:53:05.840297 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-09-03 00:53:05.840303 | orchestrator | Wednesday 03 September 2025 00:45:36 +0000 (0:00:00.340) 0:03:13.492 *** 2025-09-03 00:53:05.840309 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.840329 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.840335 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.840341 | orchestrator | 2025-09-03 00:53:05.840347 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-09-03 00:53:05.840352 | orchestrator | Wednesday 03 September 2025 00:45:37 +0000 (0:00:00.429) 0:03:13.921 *** 2025-09-03 00:53:05.840358 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.840363 | orchestrator | 2025-09-03 00:53:05.840369 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-09-03 00:53:05.840374 | orchestrator | Wednesday 03 September 2025 00:45:37 +0000 (0:00:00.180) 0:03:14.101 *** 2025-09-03 00:53:05.840380 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.840389 | orchestrator | 2025-09-03 00:53:05.840394 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-09-03 00:53:05.840400 | orchestrator | Wednesday 03 September 2025 00:45:37 +0000 (0:00:00.201) 0:03:14.303 *** 2025-09-03 00:53:05.840405 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.840411 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.840416 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.840422 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-03 00:53:05.840428 | orchestrator | 2025-09-03 00:53:05.840433 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-09-03 00:53:05.840439 | orchestrator | Wednesday 03 September 2025 00:45:38 +0000 (0:00:00.757) 0:03:15.060 *** 2025-09-03 00:53:05.840444 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.840450 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.840455 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.840461 | orchestrator | 2025-09-03 00:53:05.840466 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-09-03 00:53:05.840472 | orchestrator | Wednesday 03 September 2025 00:45:39 +0000 (0:00:00.603) 0:03:15.664 *** 2025-09-03 00:53:05.840477 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:53:05.840483 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:53:05.840488 | orchestrator | 
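The OSD handler above wraps any restart in safety steps: fetch the pool list, check the balancer, disable pg autoscaling and the balancer, restart, then re-enable both; here every step is skipped because no OSD restart was triggered. A sketch of that sequence with the ceph CLI (the restart-script path is illustrative):

import json
import subprocess

def ceph(*args: str) -> str:
    return subprocess.run(["ceph", *args], capture_output=True,
                          text=True, check=True).stdout

def restart_osds_safely(restart_script: str = "/tmp/restart_osd_daemon.sh") -> None:
    # Sketch of the disable/restart/re-enable sequence the OSD handler performs.
    pools = json.loads(ceph("osd", "pool", "ls", "detail", "--format", "json"))
    balancer_was_on = json.loads(ceph("balancer", "status", "--format", "json")).get("active", False)

    if balancer_was_on:
        ceph("balancer", "off")
    for pool in pools:
        ceph("osd", "pool", "set", pool["pool_name"], "pg_autoscale_mode", "off")

    subprocess.run(["bash", restart_script], check=True)  # per-node restart script

    for pool in pools:
        ceph("osd", "pool", "set", pool["pool_name"], "pg_autoscale_mode", "on")
    if balancer_was_on:
        ceph("balancer", "on")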
changed: [testbed-node-5] 2025-09-03 00:53:05.840494 | orchestrator | 2025-09-03 00:53:05.840499 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-09-03 00:53:05.840505 | orchestrator | Wednesday 03 September 2025 00:45:40 +0000 (0:00:01.718) 0:03:17.382 *** 2025-09-03 00:53:05.840511 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-03 00:53:05.840516 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-03 00:53:05.840522 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-03 00:53:05.840527 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.840533 | orchestrator | 2025-09-03 00:53:05.840538 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-09-03 00:53:05.840544 | orchestrator | Wednesday 03 September 2025 00:45:41 +0000 (0:00:00.675) 0:03:18.058 *** 2025-09-03 00:53:05.840549 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.840555 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.840560 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.840566 | orchestrator | 2025-09-03 00:53:05.840571 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-09-03 00:53:05.840577 | orchestrator | Wednesday 03 September 2025 00:45:41 +0000 (0:00:00.366) 0:03:18.424 *** 2025-09-03 00:53:05.840582 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.840588 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.840593 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.840599 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-03 00:53:05.840604 | orchestrator | 2025-09-03 00:53:05.840610 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-09-03 00:53:05.840615 | orchestrator | Wednesday 03 September 2025 00:45:43 +0000 (0:00:01.217) 0:03:19.641 *** 2025-09-03 00:53:05.840621 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.840626 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.840632 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.840637 | orchestrator | 2025-09-03 00:53:05.840643 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-09-03 00:53:05.840648 | orchestrator | Wednesday 03 September 2025 00:45:43 +0000 (0:00:00.347) 0:03:19.989 *** 2025-09-03 00:53:05.840654 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:53:05.840659 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:53:05.840665 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:53:05.840670 | orchestrator | 2025-09-03 00:53:05.840676 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-09-03 00:53:05.840687 | orchestrator | Wednesday 03 September 2025 00:45:45 +0000 (0:00:01.948) 0:03:21.937 *** 2025-09-03 00:53:05.840693 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-03 00:53:05.840698 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-03 00:53:05.840704 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-03 00:53:05.840709 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.840715 | orchestrator | 2025-09-03 00:53:05.840720 | orchestrator | RUNNING HANDLER [ceph-handler : 
Set _rgw_handler_called after restart] ********* 2025-09-03 00:53:05.840726 | orchestrator | Wednesday 03 September 2025 00:45:46 +0000 (0:00:00.680) 0:03:22.618 *** 2025-09-03 00:53:05.840731 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.840737 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.840742 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.840748 | orchestrator | 2025-09-03 00:53:05.840753 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2025-09-03 00:53:05.840759 | orchestrator | Wednesday 03 September 2025 00:45:46 +0000 (0:00:00.419) 0:03:23.038 *** 2025-09-03 00:53:05.840764 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.840770 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.840778 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.840783 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.840789 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.840794 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.840800 | orchestrator | 2025-09-03 00:53:05.840806 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-09-03 00:53:05.840826 | orchestrator | Wednesday 03 September 2025 00:45:47 +0000 (0:00:00.679) 0:03:23.717 *** 2025-09-03 00:53:05.840832 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.840838 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.840843 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.840849 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 00:53:05.840854 | orchestrator | 2025-09-03 00:53:05.840860 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-09-03 00:53:05.840866 | orchestrator | Wednesday 03 September 2025 00:45:48 +0000 (0:00:00.888) 0:03:24.606 *** 2025-09-03 00:53:05.840871 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:53:05.840877 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:53:05.840882 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:53:05.840888 | orchestrator | 2025-09-03 00:53:05.840893 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-09-03 00:53:05.840899 | orchestrator | Wednesday 03 September 2025 00:45:48 +0000 (0:00:00.292) 0:03:24.898 *** 2025-09-03 00:53:05.840904 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:53:05.840910 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:53:05.840915 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:53:05.840920 | orchestrator | 2025-09-03 00:53:05.840926 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-09-03 00:53:05.840931 | orchestrator | Wednesday 03 September 2025 00:45:49 +0000 (0:00:01.422) 0:03:26.321 *** 2025-09-03 00:53:05.840937 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-03 00:53:05.840942 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-03 00:53:05.840948 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-03 00:53:05.840953 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.840959 | orchestrator | 2025-09-03 00:53:05.840975 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-09-03 00:53:05.840980 | 
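Each daemon family (mon, osd, mds, rgw, mgr) runs through the same handler shape above: set a "_handler_called" flag, copy a restart script, conditionally execute it, then clear the flag. A generic sketch of that guard pattern; the names and path are illustrative, not ceph-ansible's own:

def run_restart_handler(daemon: str, needs_restart: bool, run_script) -> dict:
    # Generic shape of the per-daemon restart handlers above.
    state = {"handler_called": True}             # "Set _X_handler_called before restart"
    script = f"/tmp/restart_{daemon}_daemon.sh"  # "Copy X restart script" (illustrative path)
    if needs_restart:                            # "Restart ceph X daemon(s)" is skipped otherwise
        run_script(script)
    state["handler_called"] = False              # "Set _X_handler_called after restart"
    return state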
orchestrator | Wednesday 03 September 2025 00:45:50 +0000 (0:00:00.604) 0:03:26.926 *** 2025-09-03 00:53:05.840986 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:53:05.840991 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:53:05.841001 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:53:05.841006 | orchestrator | 2025-09-03 00:53:05.841012 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2025-09-03 00:53:05.841018 | orchestrator | 2025-09-03 00:53:05.841023 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-03 00:53:05.841029 | orchestrator | Wednesday 03 September 2025 00:45:50 +0000 (0:00:00.583) 0:03:27.509 *** 2025-09-03 00:53:05.841034 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 00:53:05.841040 | orchestrator | 2025-09-03 00:53:05.841045 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-03 00:53:05.841051 | orchestrator | Wednesday 03 September 2025 00:45:51 +0000 (0:00:00.709) 0:03:28.219 *** 2025-09-03 00:53:05.841056 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 00:53:05.841062 | orchestrator | 2025-09-03 00:53:05.841067 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-03 00:53:05.841073 | orchestrator | Wednesday 03 September 2025 00:45:52 +0000 (0:00:00.633) 0:03:28.852 *** 2025-09-03 00:53:05.841078 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:53:05.841084 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:53:05.841090 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:53:05.841095 | orchestrator | 2025-09-03 00:53:05.841101 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-03 00:53:05.841106 | orchestrator | Wednesday 03 September 2025 00:45:53 +0000 (0:00:01.047) 0:03:29.899 *** 2025-09-03 00:53:05.841111 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.841117 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.841122 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.841128 | orchestrator | 2025-09-03 00:53:05.841133 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-03 00:53:05.841139 | orchestrator | Wednesday 03 September 2025 00:45:53 +0000 (0:00:00.348) 0:03:30.248 *** 2025-09-03 00:53:05.841144 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.841150 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.841156 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.841161 | orchestrator | 2025-09-03 00:53:05.841167 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-03 00:53:05.841172 | orchestrator | Wednesday 03 September 2025 00:45:54 +0000 (0:00:00.811) 0:03:31.060 *** 2025-09-03 00:53:05.841178 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.841183 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.841189 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.841194 | orchestrator | 2025-09-03 00:53:05.841200 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-03 00:53:05.841205 | orchestrator | 
Wednesday 03 September 2025 00:45:54 +0000 (0:00:00.448) 0:03:31.509 *** 2025-09-03 00:53:05.841211 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:53:05.841216 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:53:05.841222 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:53:05.841227 | orchestrator | 2025-09-03 00:53:05.841233 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-03 00:53:05.841239 | orchestrator | Wednesday 03 September 2025 00:45:55 +0000 (0:00:00.863) 0:03:32.372 *** 2025-09-03 00:53:05.841244 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.841250 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.841255 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.841261 | orchestrator | 2025-09-03 00:53:05.841269 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-03 00:53:05.841275 | orchestrator | Wednesday 03 September 2025 00:45:56 +0000 (0:00:00.315) 0:03:32.687 *** 2025-09-03 00:53:05.841280 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.841286 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.841291 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.841301 | orchestrator | 2025-09-03 00:53:05.841321 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-03 00:53:05.841327 | orchestrator | Wednesday 03 September 2025 00:45:56 +0000 (0:00:00.630) 0:03:33.318 *** 2025-09-03 00:53:05.841332 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:53:05.841338 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:53:05.841343 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:53:05.841349 | orchestrator | 2025-09-03 00:53:05.841354 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-03 00:53:05.841360 | orchestrator | Wednesday 03 September 2025 00:45:57 +0000 (0:00:00.697) 0:03:34.015 *** 2025-09-03 00:53:05.841365 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:53:05.841371 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:53:05.841377 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:53:05.841382 | orchestrator | 2025-09-03 00:53:05.841388 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-03 00:53:05.841393 | orchestrator | Wednesday 03 September 2025 00:45:58 +0000 (0:00:00.867) 0:03:34.882 *** 2025-09-03 00:53:05.841398 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.841404 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.841409 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.841415 | orchestrator | 2025-09-03 00:53:05.841420 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-03 00:53:05.841426 | orchestrator | Wednesday 03 September 2025 00:45:58 +0000 (0:00:00.377) 0:03:35.260 *** 2025-09-03 00:53:05.841431 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:53:05.841437 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:53:05.841442 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:53:05.841448 | orchestrator | 2025-09-03 00:53:05.841453 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-03 00:53:05.841459 | orchestrator | Wednesday 03 September 2025 00:45:59 +0000 (0:00:00.625) 0:03:35.885 *** 2025-09-03 00:53:05.841464 | orchestrator | skipping: 
[testbed-node-0] 2025-09-03 00:53:05.841470 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.841475 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.841481 | orchestrator | 2025-09-03 00:53:05.841486 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-03 00:53:05.841492 | orchestrator | Wednesday 03 September 2025 00:46:00 +0000 (0:00:00.710) 0:03:36.596 *** 2025-09-03 00:53:05.841497 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.841503 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.841508 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.841514 | orchestrator | 2025-09-03 00:53:05.841519 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-03 00:53:05.841525 | orchestrator | Wednesday 03 September 2025 00:46:00 +0000 (0:00:00.292) 0:03:36.888 *** 2025-09-03 00:53:05.841530 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.841536 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.841541 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.841546 | orchestrator | 2025-09-03 00:53:05.841552 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-03 00:53:05.841557 | orchestrator | Wednesday 03 September 2025 00:46:00 +0000 (0:00:00.534) 0:03:37.423 *** 2025-09-03 00:53:05.841563 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.841568 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.841574 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.841579 | orchestrator | 2025-09-03 00:53:05.841584 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-03 00:53:05.841590 | orchestrator | Wednesday 03 September 2025 00:46:01 +0000 (0:00:00.615) 0:03:38.039 *** 2025-09-03 00:53:05.841595 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.841601 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.841606 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.841612 | orchestrator | 2025-09-03 00:53:05.841617 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-03 00:53:05.841627 | orchestrator | Wednesday 03 September 2025 00:46:01 +0000 (0:00:00.296) 0:03:38.335 *** 2025-09-03 00:53:05.841632 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:53:05.841638 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:53:05.841643 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:53:05.841649 | orchestrator | 2025-09-03 00:53:05.841654 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-03 00:53:05.841660 | orchestrator | Wednesday 03 September 2025 00:46:02 +0000 (0:00:00.365) 0:03:38.701 *** 2025-09-03 00:53:05.841665 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:53:05.841671 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:53:05.841676 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:53:05.841682 | orchestrator | 2025-09-03 00:53:05.841687 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-03 00:53:05.841693 | orchestrator | Wednesday 03 September 2025 00:46:02 +0000 (0:00:00.320) 0:03:39.021 *** 2025-09-03 00:53:05.841698 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:53:05.841704 | orchestrator | ok: [testbed-node-1] 2025-09-03 
00:53:05.841709 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:53:05.841714 | orchestrator | 2025-09-03 00:53:05.841720 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2025-09-03 00:53:05.841725 | orchestrator | Wednesday 03 September 2025 00:46:03 +0000 (0:00:00.771) 0:03:39.793 *** 2025-09-03 00:53:05.841731 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:53:05.841736 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:53:05.841742 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:53:05.841747 | orchestrator | 2025-09-03 00:53:05.841752 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2025-09-03 00:53:05.841758 | orchestrator | Wednesday 03 September 2025 00:46:03 +0000 (0:00:00.312) 0:03:40.106 *** 2025-09-03 00:53:05.841763 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 00:53:05.841769 | orchestrator | 2025-09-03 00:53:05.841774 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2025-09-03 00:53:05.841782 | orchestrator | Wednesday 03 September 2025 00:46:04 +0000 (0:00:00.538) 0:03:40.645 *** 2025-09-03 00:53:05.841788 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.841793 | orchestrator | 2025-09-03 00:53:05.841799 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2025-09-03 00:53:05.841819 | orchestrator | Wednesday 03 September 2025 00:46:04 +0000 (0:00:00.276) 0:03:40.921 *** 2025-09-03 00:53:05.841825 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-09-03 00:53:05.841830 | orchestrator | 2025-09-03 00:53:05.841836 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2025-09-03 00:53:05.841841 | orchestrator | Wednesday 03 September 2025 00:46:05 +0000 (0:00:01.074) 0:03:41.995 *** 2025-09-03 00:53:05.841847 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:53:05.841852 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:53:05.841858 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:53:05.841863 | orchestrator | 2025-09-03 00:53:05.841869 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2025-09-03 00:53:05.841874 | orchestrator | Wednesday 03 September 2025 00:46:05 +0000 (0:00:00.300) 0:03:42.296 *** 2025-09-03 00:53:05.841880 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:53:05.841885 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:53:05.841891 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:53:05.841896 | orchestrator | 2025-09-03 00:53:05.841902 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2025-09-03 00:53:05.841907 | orchestrator | Wednesday 03 September 2025 00:46:06 +0000 (0:00:00.372) 0:03:42.669 *** 2025-09-03 00:53:05.841913 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:53:05.841918 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:53:05.841924 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:53:05.841929 | orchestrator | 2025-09-03 00:53:05.841935 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2025-09-03 00:53:05.841944 | orchestrator | Wednesday 03 September 2025 00:46:07 +0000 (0:00:01.271) 0:03:43.941 *** 2025-09-03 00:53:05.841949 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:53:05.841955 | 
orchestrator | changed: [testbed-node-1] 2025-09-03 00:53:05.841961 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:53:05.841977 | orchestrator | 2025-09-03 00:53:05.841982 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2025-09-03 00:53:05.841988 | orchestrator | Wednesday 03 September 2025 00:46:08 +0000 (0:00:01.115) 0:03:45.056 *** 2025-09-03 00:53:05.841993 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:53:05.841999 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:53:05.842004 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:53:05.842010 | orchestrator | 2025-09-03 00:53:05.842059 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2025-09-03 00:53:05.842067 | orchestrator | Wednesday 03 September 2025 00:46:09 +0000 (0:00:00.676) 0:03:45.733 *** 2025-09-03 00:53:05.842073 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:53:05.842078 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:53:05.842084 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:53:05.842090 | orchestrator | 2025-09-03 00:53:05.842095 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2025-09-03 00:53:05.842101 | orchestrator | Wednesday 03 September 2025 00:46:09 +0000 (0:00:00.606) 0:03:46.340 *** 2025-09-03 00:53:05.842106 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:53:05.842112 | orchestrator | 2025-09-03 00:53:05.842117 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2025-09-03 00:53:05.842123 | orchestrator | Wednesday 03 September 2025 00:46:11 +0000 (0:00:01.309) 0:03:47.650 *** 2025-09-03 00:53:05.842128 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:53:05.842134 | orchestrator | 2025-09-03 00:53:05.842140 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2025-09-03 00:53:05.842145 | orchestrator | Wednesday 03 September 2025 00:46:12 +0000 (0:00:01.025) 0:03:48.675 *** 2025-09-03 00:53:05.842151 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-03 00:53:05.842156 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-03 00:53:05.842162 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-03 00:53:05.842167 | orchestrator | ok: [testbed-node-1] => (item=None) 2025-09-03 00:53:05.842173 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-03 00:53:05.842178 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-03 00:53:05.842184 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-03 00:53:05.842190 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2025-09-03 00:53:05.842195 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-03 00:53:05.842201 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2025-09-03 00:53:05.842206 | orchestrator | ok: [testbed-node-2] => (item=None) 2025-09-03 00:53:05.842212 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2025-09-03 00:53:05.842217 | orchestrator | 2025-09-03 00:53:05.842223 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2025-09-03 00:53:05.842229 | orchestrator | Wednesday 03 September 2025 00:46:15 +0000 (0:00:03.175) 0:03:51.851 *** 2025-09-03 
00:53:05.842234 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:53:05.842240 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:53:05.842245 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:53:05.842251 | orchestrator | 2025-09-03 00:53:05.842256 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2025-09-03 00:53:05.842262 | orchestrator | Wednesday 03 September 2025 00:46:16 +0000 (0:00:01.332) 0:03:53.183 *** 2025-09-03 00:53:05.842267 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:53:05.842273 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:53:05.842283 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:53:05.842288 | orchestrator | 2025-09-03 00:53:05.842294 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2025-09-03 00:53:05.842299 | orchestrator | Wednesday 03 September 2025 00:46:16 +0000 (0:00:00.312) 0:03:53.496 *** 2025-09-03 00:53:05.842305 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:53:05.842310 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:53:05.842316 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:53:05.842321 | orchestrator | 2025-09-03 00:53:05.842327 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2025-09-03 00:53:05.842336 | orchestrator | Wednesday 03 September 2025 00:46:17 +0000 (0:00:00.228) 0:03:53.725 *** 2025-09-03 00:53:05.842342 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:53:05.842347 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:53:05.842353 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:53:05.842359 | orchestrator | 2025-09-03 00:53:05.842379 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2025-09-03 00:53:05.842385 | orchestrator | Wednesday 03 September 2025 00:46:18 +0000 (0:00:01.422) 0:03:55.147 *** 2025-09-03 00:53:05.842390 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:53:05.842396 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:53:05.842401 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:53:05.842407 | orchestrator | 2025-09-03 00:53:05.842412 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2025-09-03 00:53:05.842418 | orchestrator | Wednesday 03 September 2025 00:46:20 +0000 (0:00:01.385) 0:03:56.533 *** 2025-09-03 00:53:05.842423 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.842429 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.842435 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.842440 | orchestrator | 2025-09-03 00:53:05.842446 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2025-09-03 00:53:05.842451 | orchestrator | Wednesday 03 September 2025 00:46:20 +0000 (0:00:00.355) 0:03:56.889 *** 2025-09-03 00:53:05.842456 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 00:53:05.842462 | orchestrator | 2025-09-03 00:53:05.842468 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2025-09-03 00:53:05.842473 | orchestrator | Wednesday 03 September 2025 00:46:20 +0000 (0:00:00.529) 0:03:57.418 *** 2025-09-03 00:53:05.842479 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.842484 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.842490 | 
orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.842495 | orchestrator | 2025-09-03 00:53:05.842501 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2025-09-03 00:53:05.842506 | orchestrator | Wednesday 03 September 2025 00:46:21 +0000 (0:00:00.503) 0:03:57.922 *** 2025-09-03 00:53:05.842511 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.842517 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.842522 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.842528 | orchestrator | 2025-09-03 00:53:05.842533 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2025-09-03 00:53:05.842539 | orchestrator | Wednesday 03 September 2025 00:46:21 +0000 (0:00:00.290) 0:03:58.213 *** 2025-09-03 00:53:05.842544 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 00:53:05.842550 | orchestrator | 2025-09-03 00:53:05.842555 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2025-09-03 00:53:05.842561 | orchestrator | Wednesday 03 September 2025 00:46:22 +0000 (0:00:00.517) 0:03:58.730 *** 2025-09-03 00:53:05.842566 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:53:05.842572 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:53:05.842577 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:53:05.842583 | orchestrator | 2025-09-03 00:53:05.842588 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2025-09-03 00:53:05.842599 | orchestrator | Wednesday 03 September 2025 00:46:24 +0000 (0:00:02.064) 0:04:00.794 *** 2025-09-03 00:53:05.842605 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:53:05.842610 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:53:05.842616 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:53:05.842621 | orchestrator | 2025-09-03 00:53:05.842627 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2025-09-03 00:53:05.842632 | orchestrator | Wednesday 03 September 2025 00:46:25 +0000 (0:00:01.493) 0:04:02.287 *** 2025-09-03 00:53:05.842638 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:53:05.842643 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:53:05.842648 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:53:05.842654 | orchestrator | 2025-09-03 00:53:05.842659 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2025-09-03 00:53:05.842665 | orchestrator | Wednesday 03 September 2025 00:46:27 +0000 (0:00:01.701) 0:04:03.989 *** 2025-09-03 00:53:05.842670 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:53:05.842676 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:53:05.842681 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:53:05.842686 | orchestrator | 2025-09-03 00:53:05.842692 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2025-09-03 00:53:05.842697 | orchestrator | Wednesday 03 September 2025 00:46:29 +0000 (0:00:01.814) 0:04:05.803 *** 2025-09-03 00:53:05.842703 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 00:53:05.842708 | orchestrator | 2025-09-03 00:53:05.842714 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] 
************* 2025-09-03 00:53:05.842719 | orchestrator | Wednesday 03 September 2025 00:46:30 +0000 (0:00:00.784) 0:04:06.588 *** 2025-09-03 00:53:05.842725 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 2025-09-03 00:53:05.842730 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:53:05.842735 | orchestrator | 2025-09-03 00:53:05.842741 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2025-09-03 00:53:05.842746 | orchestrator | Wednesday 03 September 2025 00:46:51 +0000 (0:00:21.814) 0:04:28.402 *** 2025-09-03 00:53:05.842752 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:53:05.842757 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:53:05.842763 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:53:05.842768 | orchestrator | 2025-09-03 00:53:05.842773 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2025-09-03 00:53:05.842779 | orchestrator | Wednesday 03 September 2025 00:47:02 +0000 (0:00:10.352) 0:04:38.755 *** 2025-09-03 00:53:05.842784 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.842790 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.842795 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.842801 | orchestrator | 2025-09-03 00:53:05.842809 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2025-09-03 00:53:05.842814 | orchestrator | Wednesday 03 September 2025 00:47:02 +0000 (0:00:00.280) 0:04:39.035 *** 2025-09-03 00:53:05.842835 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__ebb57cfb9c80fd8c1b31a0528b8eb0c44d2d9e2b'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2025-09-03 00:53:05.842844 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__ebb57cfb9c80fd8c1b31a0528b8eb0c44d2d9e2b'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2025-09-03 00:53:05.842850 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__ebb57cfb9c80fd8c1b31a0528b8eb0c44d2d9e2b'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2025-09-03 00:53:05.842861 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__ebb57cfb9c80fd8c1b31a0528b8eb0c44d2d9e2b'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2025-09-03 00:53:05.842868 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 
'osd_crush_chooseleaf_type': '__omit_place_holder__ebb57cfb9c80fd8c1b31a0528b8eb0c44d2d9e2b'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2025-09-03 00:53:05.842875 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__ebb57cfb9c80fd8c1b31a0528b8eb0c44d2d9e2b'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__ebb57cfb9c80fd8c1b31a0528b8eb0c44d2d9e2b'}])  2025-09-03 00:53:05.842881 | orchestrator | 2025-09-03 00:53:05.842887 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-03 00:53:05.842892 | orchestrator | Wednesday 03 September 2025 00:47:16 +0000 (0:00:14.221) 0:04:53.257 *** 2025-09-03 00:53:05.842898 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.842903 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.842909 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.842914 | orchestrator | 2025-09-03 00:53:05.842920 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-09-03 00:53:05.842925 | orchestrator | Wednesday 03 September 2025 00:47:17 +0000 (0:00:00.273) 0:04:53.531 *** 2025-09-03 00:53:05.842931 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 00:53:05.842936 | orchestrator | 2025-09-03 00:53:05.842942 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-09-03 00:53:05.842947 | orchestrator | Wednesday 03 September 2025 00:47:17 +0000 (0:00:00.400) 0:04:53.932 *** 2025-09-03 00:53:05.842953 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:53:05.842959 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:53:05.842974 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:53:05.842979 | orchestrator | 2025-09-03 00:53:05.842985 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-09-03 00:53:05.842990 | orchestrator | Wednesday 03 September 2025 00:47:17 +0000 (0:00:00.386) 0:04:54.319 *** 2025-09-03 00:53:05.842996 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.843001 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.843007 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.843012 | orchestrator | 2025-09-03 00:53:05.843018 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-09-03 00:53:05.843023 | orchestrator | Wednesday 03 September 2025 00:47:18 +0000 (0:00:00.221) 0:04:54.540 *** 2025-09-03 00:53:05.843029 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-03 00:53:05.843034 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-03 00:53:05.843040 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-03 00:53:05.843045 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.843051 | orchestrator | 2025-09-03 00:53:05.843056 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-09-03 00:53:05.843068 | orchestrator | Wednesday 03 September 2025 00:47:18 +0000 (0:00:00.429) 0:04:54.970 *** 2025-09-03 00:53:05.843074 | orchestrator | ok: [testbed-node-0] 2025-09-03 
00:53:05.843079 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:53:05.843085 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:53:05.843090 | orchestrator | 2025-09-03 00:53:05.843110 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2025-09-03 00:53:05.843116 | orchestrator | 2025-09-03 00:53:05.843122 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-03 00:53:05.843127 | orchestrator | Wednesday 03 September 2025 00:47:19 +0000 (0:00:00.603) 0:04:55.574 *** 2025-09-03 00:53:05.843133 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 00:53:05.843138 | orchestrator | 2025-09-03 00:53:05.843144 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-03 00:53:05.843149 | orchestrator | Wednesday 03 September 2025 00:47:19 +0000 (0:00:00.425) 0:04:55.999 *** 2025-09-03 00:53:05.843155 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 00:53:05.843161 | orchestrator | 2025-09-03 00:53:05.843166 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-03 00:53:05.843172 | orchestrator | Wednesday 03 September 2025 00:47:19 +0000 (0:00:00.426) 0:04:56.426 *** 2025-09-03 00:53:05.843177 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:53:05.843183 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:53:05.843188 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:53:05.843194 | orchestrator | 2025-09-03 00:53:05.843200 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-03 00:53:05.843205 | orchestrator | Wednesday 03 September 2025 00:47:20 +0000 (0:00:00.766) 0:04:57.193 *** 2025-09-03 00:53:05.843210 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.843216 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.843222 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.843227 | orchestrator | 2025-09-03 00:53:05.843232 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-03 00:53:05.843238 | orchestrator | Wednesday 03 September 2025 00:47:20 +0000 (0:00:00.265) 0:04:57.459 *** 2025-09-03 00:53:05.843243 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.843249 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.843255 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.843260 | orchestrator | 2025-09-03 00:53:05.843266 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-03 00:53:05.843271 | orchestrator | Wednesday 03 September 2025 00:47:21 +0000 (0:00:00.237) 0:04:57.696 *** 2025-09-03 00:53:05.843277 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.843282 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.843288 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.843293 | orchestrator | 2025-09-03 00:53:05.843299 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-03 00:53:05.843304 | orchestrator | Wednesday 03 September 2025 00:47:21 +0000 (0:00:00.238) 0:04:57.935 *** 2025-09-03 00:53:05.843310 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:53:05.843315 | 
orchestrator | ok: [testbed-node-1] 2025-09-03 00:53:05.843321 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:53:05.843327 | orchestrator | 2025-09-03 00:53:05.843332 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-03 00:53:05.843338 | orchestrator | Wednesday 03 September 2025 00:47:22 +0000 (0:00:00.803) 0:04:58.738 *** 2025-09-03 00:53:05.843343 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.843349 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.843355 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.843360 | orchestrator | 2025-09-03 00:53:05.843365 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-03 00:53:05.843374 | orchestrator | Wednesday 03 September 2025 00:47:22 +0000 (0:00:00.294) 0:04:59.033 *** 2025-09-03 00:53:05.843380 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.843386 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.843391 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.843397 | orchestrator | 2025-09-03 00:53:05.843402 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-03 00:53:05.843408 | orchestrator | Wednesday 03 September 2025 00:47:22 +0000 (0:00:00.245) 0:04:59.279 *** 2025-09-03 00:53:05.843413 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:53:05.843419 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:53:05.843424 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:53:05.843430 | orchestrator | 2025-09-03 00:53:05.843435 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-03 00:53:05.843441 | orchestrator | Wednesday 03 September 2025 00:47:23 +0000 (0:00:00.668) 0:04:59.947 *** 2025-09-03 00:53:05.843446 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:53:05.843452 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:53:05.843457 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:53:05.843463 | orchestrator | 2025-09-03 00:53:05.843468 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-03 00:53:05.843474 | orchestrator | Wednesday 03 September 2025 00:47:24 +0000 (0:00:00.923) 0:05:00.871 *** 2025-09-03 00:53:05.843479 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.843485 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.843490 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.843496 | orchestrator | 2025-09-03 00:53:05.843501 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-03 00:53:05.843507 | orchestrator | Wednesday 03 September 2025 00:47:24 +0000 (0:00:00.309) 0:05:01.181 *** 2025-09-03 00:53:05.843512 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:53:05.843518 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:53:05.843523 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:53:05.843529 | orchestrator | 2025-09-03 00:53:05.843534 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-03 00:53:05.843540 | orchestrator | Wednesday 03 September 2025 00:47:24 +0000 (0:00:00.316) 0:05:01.497 *** 2025-09-03 00:53:05.843545 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.843551 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.843557 | orchestrator | skipping: [testbed-node-2] 2025-09-03 
00:53:05.843562 | orchestrator | 2025-09-03 00:53:05.843571 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-03 00:53:05.843576 | orchestrator | Wednesday 03 September 2025 00:47:25 +0000 (0:00:00.340) 0:05:01.837 *** 2025-09-03 00:53:05.843582 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.843587 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.843608 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.843614 | orchestrator | 2025-09-03 00:53:05.843620 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-03 00:53:05.843625 | orchestrator | Wednesday 03 September 2025 00:47:25 +0000 (0:00:00.556) 0:05:02.394 *** 2025-09-03 00:53:05.843631 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.843636 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.843642 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.843647 | orchestrator | 2025-09-03 00:53:05.843653 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-03 00:53:05.843659 | orchestrator | Wednesday 03 September 2025 00:47:26 +0000 (0:00:00.280) 0:05:02.674 *** 2025-09-03 00:53:05.843664 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.843670 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.843675 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.843681 | orchestrator | 2025-09-03 00:53:05.843686 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-03 00:53:05.843692 | orchestrator | Wednesday 03 September 2025 00:47:26 +0000 (0:00:00.252) 0:05:02.927 *** 2025-09-03 00:53:05.843700 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.843706 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.843711 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.843717 | orchestrator | 2025-09-03 00:53:05.843723 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-03 00:53:05.843728 | orchestrator | Wednesday 03 September 2025 00:47:26 +0000 (0:00:00.268) 0:05:03.195 *** 2025-09-03 00:53:05.843734 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:53:05.843739 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:53:05.843745 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:53:05.843750 | orchestrator | 2025-09-03 00:53:05.843756 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-03 00:53:05.843761 | orchestrator | Wednesday 03 September 2025 00:47:26 +0000 (0:00:00.277) 0:05:03.473 *** 2025-09-03 00:53:05.843767 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:53:05.843773 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:53:05.843778 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:53:05.843784 | orchestrator | 2025-09-03 00:53:05.843789 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-03 00:53:05.843795 | orchestrator | Wednesday 03 September 2025 00:47:27 +0000 (0:00:00.464) 0:05:03.937 *** 2025-09-03 00:53:05.843800 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:53:05.843806 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:53:05.843811 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:53:05.843817 | orchestrator | 2025-09-03 00:53:05.843823 | orchestrator | TASK [ceph-mgr : Set_fact 
container_exec_cmd] ********************************** 2025-09-03 00:53:05.843828 | orchestrator | Wednesday 03 September 2025 00:47:27 +0000 (0:00:00.463) 0:05:04.401 *** 2025-09-03 00:53:05.843834 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-03 00:53:05.843839 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-03 00:53:05.843845 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-03 00:53:05.843850 | orchestrator | 2025-09-03 00:53:05.843856 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2025-09-03 00:53:05.843861 | orchestrator | Wednesday 03 September 2025 00:47:28 +0000 (0:00:00.663) 0:05:05.064 *** 2025-09-03 00:53:05.843867 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 00:53:05.843872 | orchestrator | 2025-09-03 00:53:05.843878 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2025-09-03 00:53:05.843883 | orchestrator | Wednesday 03 September 2025 00:47:29 +0000 (0:00:00.491) 0:05:05.556 *** 2025-09-03 00:53:05.843889 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:53:05.843894 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:53:05.843900 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:53:05.843905 | orchestrator | 2025-09-03 00:53:05.843911 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2025-09-03 00:53:05.843917 | orchestrator | Wednesday 03 September 2025 00:47:29 +0000 (0:00:00.614) 0:05:06.170 *** 2025-09-03 00:53:05.843922 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.843928 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.843933 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.843939 | orchestrator | 2025-09-03 00:53:05.843944 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2025-09-03 00:53:05.843950 | orchestrator | Wednesday 03 September 2025 00:47:29 +0000 (0:00:00.251) 0:05:06.422 *** 2025-09-03 00:53:05.843955 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-03 00:53:05.843984 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-03 00:53:05.843991 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-03 00:53:05.843997 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2025-09-03 00:53:05.844002 | orchestrator | 2025-09-03 00:53:05.844008 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2025-09-03 00:53:05.844017 | orchestrator | Wednesday 03 September 2025 00:47:40 +0000 (0:00:10.651) 0:05:17.073 *** 2025-09-03 00:53:05.844022 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:53:05.844028 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:53:05.844033 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:53:05.844039 | orchestrator | 2025-09-03 00:53:05.844044 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2025-09-03 00:53:05.844050 | orchestrator | Wednesday 03 September 2025 00:47:41 +0000 (0:00:00.505) 0:05:17.578 *** 2025-09-03 00:53:05.844055 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-09-03 00:53:05.844061 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-09-03 00:53:05.844066 | 
orchestrator | skipping: [testbed-node-2] => (item=None)  2025-09-03 00:53:05.844072 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-09-03 00:53:05.844080 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-03 00:53:05.844086 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-03 00:53:05.844091 | orchestrator | 2025-09-03 00:53:05.844110 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2025-09-03 00:53:05.844117 | orchestrator | Wednesday 03 September 2025 00:47:43 +0000 (0:00:02.221) 0:05:19.801 *** 2025-09-03 00:53:05.844122 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-09-03 00:53:05.844128 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-09-03 00:53:05.844133 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-09-03 00:53:05.844139 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-03 00:53:05.844144 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-09-03 00:53:05.844150 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-09-03 00:53:05.844155 | orchestrator | 2025-09-03 00:53:05.844161 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2025-09-03 00:53:05.844166 | orchestrator | Wednesday 03 September 2025 00:47:44 +0000 (0:00:01.225) 0:05:21.026 *** 2025-09-03 00:53:05.844172 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:53:05.844177 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:53:05.844183 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:53:05.844188 | orchestrator | 2025-09-03 00:53:05.844194 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2025-09-03 00:53:05.844199 | orchestrator | Wednesday 03 September 2025 00:47:45 +0000 (0:00:00.775) 0:05:21.802 *** 2025-09-03 00:53:05.844205 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.844210 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.844216 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.844221 | orchestrator | 2025-09-03 00:53:05.844227 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2025-09-03 00:53:05.844232 | orchestrator | Wednesday 03 September 2025 00:47:45 +0000 (0:00:00.313) 0:05:22.115 *** 2025-09-03 00:53:05.844238 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.844243 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.844249 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.844255 | orchestrator | 2025-09-03 00:53:05.844260 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2025-09-03 00:53:05.844265 | orchestrator | Wednesday 03 September 2025 00:47:46 +0000 (0:00:00.564) 0:05:22.680 *** 2025-09-03 00:53:05.844271 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 00:53:05.844276 | orchestrator | 2025-09-03 00:53:05.844282 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2025-09-03 00:53:05.844287 | orchestrator | Wednesday 03 September 2025 00:47:46 +0000 (0:00:00.585) 0:05:23.266 *** 2025-09-03 00:53:05.844293 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.844298 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.844304 | orchestrator | 
skipping: [testbed-node-2] 2025-09-03 00:53:05.844309 | orchestrator | 2025-09-03 00:53:05.844315 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2025-09-03 00:53:05.844324 | orchestrator | Wednesday 03 September 2025 00:47:47 +0000 (0:00:00.292) 0:05:23.559 *** 2025-09-03 00:53:05.844330 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.844336 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.844341 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.844347 | orchestrator | 2025-09-03 00:53:05.844352 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2025-09-03 00:53:05.844358 | orchestrator | Wednesday 03 September 2025 00:47:47 +0000 (0:00:00.600) 0:05:24.159 *** 2025-09-03 00:53:05.844363 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-1, testbed-node-0, testbed-node-2 2025-09-03 00:53:05.844369 | orchestrator | 2025-09-03 00:53:05.844374 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2025-09-03 00:53:05.844380 | orchestrator | Wednesday 03 September 2025 00:47:48 +0000 (0:00:00.569) 0:05:24.729 *** 2025-09-03 00:53:05.844385 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:53:05.844391 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:53:05.844396 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:53:05.844402 | orchestrator | 2025-09-03 00:53:05.844407 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2025-09-03 00:53:05.844413 | orchestrator | Wednesday 03 September 2025 00:47:49 +0000 (0:00:01.110) 0:05:25.839 *** 2025-09-03 00:53:05.844418 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:53:05.844424 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:53:05.844429 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:53:05.844435 | orchestrator | 2025-09-03 00:53:05.844440 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2025-09-03 00:53:05.844446 | orchestrator | Wednesday 03 September 2025 00:47:50 +0000 (0:00:01.381) 0:05:27.220 *** 2025-09-03 00:53:05.844451 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:53:05.844457 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:53:05.844462 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:53:05.844468 | orchestrator | 2025-09-03 00:53:05.844473 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2025-09-03 00:53:05.844479 | orchestrator | Wednesday 03 September 2025 00:47:52 +0000 (0:00:01.684) 0:05:28.905 *** 2025-09-03 00:53:05.844484 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:53:05.844490 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:53:05.844495 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:53:05.844500 | orchestrator | 2025-09-03 00:53:05.844505 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2025-09-03 00:53:05.844510 | orchestrator | Wednesday 03 September 2025 00:47:54 +0000 (0:00:02.073) 0:05:30.978 *** 2025-09-03 00:53:05.844515 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.844520 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.844525 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2025-09-03 00:53:05.844530 | orchestrator | 2025-09-03 
00:53:05.844535 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2025-09-03 00:53:05.844542 | orchestrator | Wednesday 03 September 2025 00:47:54 +0000 (0:00:00.424) 0:05:31.402 *** 2025-09-03 00:53:05.844547 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2025-09-03 00:53:05.844565 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2025-09-03 00:53:05.844571 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2025-09-03 00:53:05.844576 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2025-09-03 00:53:05.844581 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 2025-09-03 00:53:05.844586 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-09-03 00:53:05.844596 | orchestrator | 2025-09-03 00:53:05.844601 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2025-09-03 00:53:05.844606 | orchestrator | Wednesday 03 September 2025 00:48:25 +0000 (0:00:30.534) 0:06:01.937 *** 2025-09-03 00:53:05.844610 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-09-03 00:53:05.844615 | orchestrator | 2025-09-03 00:53:05.844620 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2025-09-03 00:53:05.844625 | orchestrator | Wednesday 03 September 2025 00:48:26 +0000 (0:00:01.279) 0:06:03.217 *** 2025-09-03 00:53:05.844630 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:53:05.844635 | orchestrator | 2025-09-03 00:53:05.844640 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2025-09-03 00:53:05.844645 | orchestrator | Wednesday 03 September 2025 00:48:26 +0000 (0:00:00.292) 0:06:03.509 *** 2025-09-03 00:53:05.844650 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:53:05.844655 | orchestrator | 2025-09-03 00:53:05.844660 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2025-09-03 00:53:05.844665 | orchestrator | Wednesday 03 September 2025 00:48:27 +0000 (0:00:00.168) 0:06:03.677 *** 2025-09-03 00:53:05.844670 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2025-09-03 00:53:05.844674 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2025-09-03 00:53:05.844679 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2025-09-03 00:53:05.844684 | orchestrator | 2025-09-03 00:53:05.844689 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] ************************************** 2025-09-03 00:53:05.844694 | orchestrator | Wednesday 03 September 2025 00:48:33 +0000 (0:00:06.347) 0:06:10.025 *** 2025-09-03 00:53:05.844699 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2025-09-03 00:53:05.844704 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2025-09-03 00:53:05.844709 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2025-09-03 00:53:05.844714 | orchestrator | skipping: [testbed-node-2] => (item=status)  2025-09-03 00:53:05.844719 | orchestrator | 
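Note on the mgr_modules.yml step above: the iostat/nfs/restful disables and the dashboard/prometheus enables are plain `ceph mgr module` calls issued through the first mon's container. A minimal shell sketch of the equivalent manual commands, assuming docker as the container engine and `ceph-mon-testbed-node-0` as the mon container name (both assumptions, not taken from the log):

    # List currently enabled/always-on/disabled mgr modules (JSON output).
    docker exec ceph-mon-testbed-node-0 ceph mgr module ls
    # Disable modules the deployment does not want, as logged for iostat, nfs and restful.
    docker exec ceph-mon-testbed-node-0 ceph mgr module disable restful
    # Enable the requested modules; balancer and status were skipped above, presumably already active.
    docker exec ceph-mon-testbed-node-0 ceph mgr module enable dashboard
    docker exec ceph-mon-testbed-node-0 ceph mgr module enable prometheus
    # Check that the dashboard and prometheus endpoints are being served.
    docker exec ceph-mon-testbed-node-0 ceph mgr services
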
2025-09-03 00:53:05.844724 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-03 00:53:05.844729 | orchestrator | Wednesday 03 September 2025 00:48:38 +0000 (0:00:04.841) 0:06:14.867 *** 2025-09-03 00:53:05.844733 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:53:05.844738 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:53:05.844743 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:53:05.844748 | orchestrator | 2025-09-03 00:53:05.844753 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-09-03 00:53:05.844758 | orchestrator | Wednesday 03 September 2025 00:48:39 +0000 (0:00:01.105) 0:06:15.972 *** 2025-09-03 00:53:05.844763 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 00:53:05.844768 | orchestrator | 2025-09-03 00:53:05.844773 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-09-03 00:53:05.844778 | orchestrator | Wednesday 03 September 2025 00:48:40 +0000 (0:00:00.574) 0:06:16.547 *** 2025-09-03 00:53:05.844783 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:53:05.844788 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:53:05.844793 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:53:05.844798 | orchestrator | 2025-09-03 00:53:05.844803 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-09-03 00:53:05.844808 | orchestrator | Wednesday 03 September 2025 00:48:40 +0000 (0:00:00.343) 0:06:16.890 *** 2025-09-03 00:53:05.844812 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:53:05.844818 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:53:05.844823 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:53:05.844827 | orchestrator | 2025-09-03 00:53:05.844832 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-09-03 00:53:05.844841 | orchestrator | Wednesday 03 September 2025 00:48:41 +0000 (0:00:01.373) 0:06:18.264 *** 2025-09-03 00:53:05.844846 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-03 00:53:05.844851 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-03 00:53:05.844856 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-03 00:53:05.844861 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.844865 | orchestrator | 2025-09-03 00:53:05.844870 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-09-03 00:53:05.844875 | orchestrator | Wednesday 03 September 2025 00:48:42 +0000 (0:00:00.614) 0:06:18.878 *** 2025-09-03 00:53:05.844880 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:53:05.844885 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:53:05.844890 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:53:05.844895 | orchestrator | 2025-09-03 00:53:05.844900 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2025-09-03 00:53:05.844905 | orchestrator | 2025-09-03 00:53:05.844910 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-03 00:53:05.844917 | orchestrator | Wednesday 03 September 2025 00:48:42 +0000 (0:00:00.531) 0:06:19.410 *** 2025-09-03 00:53:05.844922 | orchestrator | included: 
/ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-03 00:53:05.844927 | orchestrator | 2025-09-03 00:53:05.844945 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-03 00:53:05.844950 | orchestrator | Wednesday 03 September 2025 00:48:43 +0000 (0:00:00.829) 0:06:20.240 *** 2025-09-03 00:53:05.844955 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-03 00:53:05.844960 | orchestrator | 2025-09-03 00:53:05.844974 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-03 00:53:05.844979 | orchestrator | Wednesday 03 September 2025 00:48:44 +0000 (0:00:00.583) 0:06:20.824 *** 2025-09-03 00:53:05.844984 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.844989 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.844994 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.844999 | orchestrator | 2025-09-03 00:53:05.845004 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-03 00:53:05.845008 | orchestrator | Wednesday 03 September 2025 00:48:44 +0000 (0:00:00.313) 0:06:21.138 *** 2025-09-03 00:53:05.845013 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.845018 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.845023 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.845028 | orchestrator | 2025-09-03 00:53:05.845033 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-03 00:53:05.845038 | orchestrator | Wednesday 03 September 2025 00:48:45 +0000 (0:00:00.954) 0:06:22.093 *** 2025-09-03 00:53:05.845043 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.845048 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.845053 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.845058 | orchestrator | 2025-09-03 00:53:05.845062 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-03 00:53:05.845067 | orchestrator | Wednesday 03 September 2025 00:48:46 +0000 (0:00:00.693) 0:06:22.787 *** 2025-09-03 00:53:05.845072 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.845077 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.845082 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.845087 | orchestrator | 2025-09-03 00:53:05.845092 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-03 00:53:05.845097 | orchestrator | Wednesday 03 September 2025 00:48:47 +0000 (0:00:00.742) 0:06:23.529 *** 2025-09-03 00:53:05.845102 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.845107 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.845111 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.845116 | orchestrator | 2025-09-03 00:53:05.845127 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-03 00:53:05.845132 | orchestrator | Wednesday 03 September 2025 00:48:47 +0000 (0:00:00.300) 0:06:23.830 *** 2025-09-03 00:53:05.845137 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.845142 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.845147 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.845152 | orchestrator | 
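Each "Check for a ... container" task in this block is a per-daemon container-runtime lookup whose result later feeds the Set_fact handler_*_status tasks, i.e. it decides whether the restart handlers may touch that daemon on that host; "skipping" here means the host is not in the corresponding daemon group, so the check is not attempted at all. A rough stand-in, assuming podman as the container runtime and ceph-<daemon>-style container names (both assumptions, not taken from this log):

  # Hypothetical equivalent of "Check for an osd container" on an OSD node.
  if [ -n "$(podman ps -q --filter name=ceph-osd)" ]; then
      echo "ceph-osd container present -> handler_osd_status becomes true"
  else
      echo "no ceph-osd container -> osd restart handlers stay skipped"
  fi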
2025-09-03 00:53:05.845157 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-03 00:53:05.845162 | orchestrator | Wednesday 03 September 2025 00:48:47 +0000 (0:00:00.597) 0:06:24.427 *** 2025-09-03 00:53:05.845167 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.845171 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.845176 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.845181 | orchestrator | 2025-09-03 00:53:05.845186 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-03 00:53:05.845191 | orchestrator | Wednesday 03 September 2025 00:48:48 +0000 (0:00:00.305) 0:06:24.733 *** 2025-09-03 00:53:05.845196 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.845201 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.845206 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.845211 | orchestrator | 2025-09-03 00:53:05.845216 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-03 00:53:05.845220 | orchestrator | Wednesday 03 September 2025 00:48:48 +0000 (0:00:00.682) 0:06:25.415 *** 2025-09-03 00:53:05.845225 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.845230 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.845235 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.845240 | orchestrator | 2025-09-03 00:53:05.845245 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-03 00:53:05.845250 | orchestrator | Wednesday 03 September 2025 00:48:49 +0000 (0:00:00.685) 0:06:26.101 *** 2025-09-03 00:53:05.845255 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.845260 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.845265 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.845270 | orchestrator | 2025-09-03 00:53:05.845275 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-03 00:53:05.845280 | orchestrator | Wednesday 03 September 2025 00:48:50 +0000 (0:00:00.687) 0:06:26.789 *** 2025-09-03 00:53:05.845285 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.845290 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.845295 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.845300 | orchestrator | 2025-09-03 00:53:05.845305 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-03 00:53:05.845310 | orchestrator | Wednesday 03 September 2025 00:48:50 +0000 (0:00:00.357) 0:06:27.146 *** 2025-09-03 00:53:05.845314 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.845319 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.845324 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.845329 | orchestrator | 2025-09-03 00:53:05.845334 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-03 00:53:05.845339 | orchestrator | Wednesday 03 September 2025 00:48:50 +0000 (0:00:00.320) 0:06:27.467 *** 2025-09-03 00:53:05.845344 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.845349 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.845354 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.845358 | orchestrator | 2025-09-03 00:53:05.845363 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 
2025-09-03 00:53:05.845371 | orchestrator | Wednesday 03 September 2025 00:48:51 +0000 (0:00:00.313) 0:06:27.780 *** 2025-09-03 00:53:05.845376 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.845381 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.845386 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.845391 | orchestrator | 2025-09-03 00:53:05.845399 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-03 00:53:05.845408 | orchestrator | Wednesday 03 September 2025 00:48:51 +0000 (0:00:00.694) 0:06:28.475 *** 2025-09-03 00:53:05.845413 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.845418 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.845423 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.845428 | orchestrator | 2025-09-03 00:53:05.845433 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-03 00:53:05.845438 | orchestrator | Wednesday 03 September 2025 00:48:52 +0000 (0:00:00.315) 0:06:28.790 *** 2025-09-03 00:53:05.845443 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.845448 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.845453 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.845458 | orchestrator | 2025-09-03 00:53:05.845463 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-03 00:53:05.845468 | orchestrator | Wednesday 03 September 2025 00:48:52 +0000 (0:00:00.250) 0:06:29.041 *** 2025-09-03 00:53:05.845473 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.845478 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.845483 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.845487 | orchestrator | 2025-09-03 00:53:05.845492 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-03 00:53:05.845497 | orchestrator | Wednesday 03 September 2025 00:48:52 +0000 (0:00:00.209) 0:06:29.251 *** 2025-09-03 00:53:05.845502 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.845507 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.845512 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.845517 | orchestrator | 2025-09-03 00:53:05.845522 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-03 00:53:05.845527 | orchestrator | Wednesday 03 September 2025 00:48:53 +0000 (0:00:00.374) 0:06:29.625 *** 2025-09-03 00:53:05.845532 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.845537 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.845542 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.845547 | orchestrator | 2025-09-03 00:53:05.845552 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2025-09-03 00:53:05.845557 | orchestrator | Wednesday 03 September 2025 00:48:53 +0000 (0:00:00.397) 0:06:30.023 *** 2025-09-03 00:53:05.845561 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.845566 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.845571 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.845576 | orchestrator | 2025-09-03 00:53:05.845581 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2025-09-03 00:53:05.845586 | orchestrator | Wednesday 03 September 2025 00:48:53 +0000 (0:00:00.237) 0:06:30.261 *** 2025-09-03 
00:53:05.845591 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-03 00:53:05.845596 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-03 00:53:05.845601 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-03 00:53:05.845605 | orchestrator | 2025-09-03 00:53:05.845610 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2025-09-03 00:53:05.845615 | orchestrator | Wednesday 03 September 2025 00:48:54 +0000 (0:00:00.742) 0:06:31.003 *** 2025-09-03 00:53:05.845620 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-03 00:53:05.845625 | orchestrator | 2025-09-03 00:53:05.845630 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2025-09-03 00:53:05.845635 | orchestrator | Wednesday 03 September 2025 00:48:55 +0000 (0:00:00.671) 0:06:31.674 *** 2025-09-03 00:53:05.845639 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.845644 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.845649 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.845654 | orchestrator | 2025-09-03 00:53:05.845659 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2025-09-03 00:53:05.845667 | orchestrator | Wednesday 03 September 2025 00:48:55 +0000 (0:00:00.277) 0:06:31.952 *** 2025-09-03 00:53:05.845672 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.845677 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.845682 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.845687 | orchestrator | 2025-09-03 00:53:05.845692 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2025-09-03 00:53:05.845697 | orchestrator | Wednesday 03 September 2025 00:48:55 +0000 (0:00:00.276) 0:06:32.228 *** 2025-09-03 00:53:05.845702 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.845707 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.845712 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.845716 | orchestrator | 2025-09-03 00:53:05.845721 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2025-09-03 00:53:05.845726 | orchestrator | Wednesday 03 September 2025 00:48:56 +0000 (0:00:00.737) 0:06:32.966 *** 2025-09-03 00:53:05.845731 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.845736 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.845741 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.845746 | orchestrator | 2025-09-03 00:53:05.845751 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2025-09-03 00:53:05.845756 | orchestrator | Wednesday 03 September 2025 00:48:56 +0000 (0:00:00.302) 0:06:33.268 *** 2025-09-03 00:53:05.845761 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-09-03 00:53:05.845765 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-09-03 00:53:05.845770 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-09-03 00:53:05.845778 | orchestrator | changed: [testbed-node-3] => 
(item={'name': 'fs.file-max', 'value': 26234859}) 2025-09-03 00:53:05.845783 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-09-03 00:53:05.845787 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-09-03 00:53:05.845797 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-09-03 00:53:05.845802 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-09-03 00:53:05.845807 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-09-03 00:53:05.845811 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-09-03 00:53:05.845816 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-09-03 00:53:05.845821 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-09-03 00:53:05.845826 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-09-03 00:53:05.845831 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-09-03 00:53:05.845836 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-09-03 00:53:05.845840 | orchestrator | 2025-09-03 00:53:05.845845 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2025-09-03 00:53:05.845850 | orchestrator | Wednesday 03 September 2025 00:48:59 +0000 (0:00:02.934) 0:06:36.202 *** 2025-09-03 00:53:05.845855 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.845860 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.845865 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.845870 | orchestrator | 2025-09-03 00:53:05.845874 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2025-09-03 00:53:05.845879 | orchestrator | Wednesday 03 September 2025 00:48:59 +0000 (0:00:00.273) 0:06:36.476 *** 2025-09-03 00:53:05.845884 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-03 00:53:05.845892 | orchestrator | 2025-09-03 00:53:05.845897 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2025-09-03 00:53:05.845902 | orchestrator | Wednesday 03 September 2025 00:49:00 +0000 (0:00:00.768) 0:06:37.244 *** 2025-09-03 00:53:05.845907 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2025-09-03 00:53:05.845912 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2025-09-03 00:53:05.845916 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2025-09-03 00:53:05.845921 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2025-09-03 00:53:05.845926 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2025-09-03 00:53:05.845931 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2025-09-03 00:53:05.845936 | orchestrator | 2025-09-03 00:53:05.845941 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2025-09-03 00:53:05.845946 | orchestrator | Wednesday 03 September 2025 00:49:01 +0000 (0:00:00.919) 0:06:38.164 *** 2025-09-03 00:53:05.845950 | orchestrator 
| ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-03 00:53:05.845955 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-03 00:53:05.845960 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-03 00:53:05.845988 | orchestrator | 2025-09-03 00:53:05.845993 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2025-09-03 00:53:05.845998 | orchestrator | Wednesday 03 September 2025 00:49:03 +0000 (0:00:01.930) 0:06:40.094 *** 2025-09-03 00:53:05.846003 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-03 00:53:05.846008 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-03 00:53:05.846013 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:53:05.846034 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-03 00:53:05.846040 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-09-03 00:53:05.846045 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:53:05.846050 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-03 00:53:05.846054 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-09-03 00:53:05.846059 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:53:05.846064 | orchestrator | 2025-09-03 00:53:05.846069 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2025-09-03 00:53:05.846074 | orchestrator | Wednesday 03 September 2025 00:49:04 +0000 (0:00:01.270) 0:06:41.364 *** 2025-09-03 00:53:05.846079 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-03 00:53:05.846084 | orchestrator | 2025-09-03 00:53:05.846089 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2025-09-03 00:53:05.846093 | orchestrator | Wednesday 03 September 2025 00:49:06 +0000 (0:00:01.951) 0:06:43.316 *** 2025-09-03 00:53:05.846098 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-03 00:53:05.846103 | orchestrator | 2025-09-03 00:53:05.846108 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2025-09-03 00:53:05.846113 | orchestrator | Wednesday 03 September 2025 00:49:07 +0000 (0:00:00.587) 0:06:43.904 *** 2025-09-03 00:53:05.846118 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-e75c81d9-f6c1-538f-9534-cc9e3445127a', 'data_vg': 'ceph-e75c81d9-f6c1-538f-9534-cc9e3445127a'}) 2025-09-03 00:53:05.846126 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-d05881db-8953-52a0-98ec-dd1036bee846', 'data_vg': 'ceph-d05881db-8953-52a0-98ec-dd1036bee846'}) 2025-09-03 00:53:05.846131 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-400ae980-4c36-5b9b-960d-631158f9c2c9', 'data_vg': 'ceph-400ae980-4c36-5b9b-960d-631158f9c2c9'}) 2025-09-03 00:53:05.846141 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-634e15af-8858-53e6-9f62-917e12b08878', 'data_vg': 'ceph-634e15af-8858-53e6-9f62-917e12b08878'}) 2025-09-03 00:53:05.846150 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-2e5a0ee6-219f-5b14-b340-2bfd497a8fc5', 'data_vg': 'ceph-2e5a0ee6-219f-5b14-b340-2bfd497a8fc5'}) 2025-09-03 00:53:05.846155 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-1107a6cb-8e5a-5215-8b60-1d473d685075', 'data_vg': 
'ceph-1107a6cb-8e5a-5215-8b60-1d473d685075'}) 2025-09-03 00:53:05.846160 | orchestrator | 2025-09-03 00:53:05.846165 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2025-09-03 00:53:05.846170 | orchestrator | Wednesday 03 September 2025 00:49:52 +0000 (0:00:44.961) 0:07:28.865 *** 2025-09-03 00:53:05.846175 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.846180 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.846185 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.846190 | orchestrator | 2025-09-03 00:53:05.846195 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2025-09-03 00:53:05.846200 | orchestrator | Wednesday 03 September 2025 00:49:52 +0000 (0:00:00.489) 0:07:29.354 *** 2025-09-03 00:53:05.846204 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-03 00:53:05.846209 | orchestrator | 2025-09-03 00:53:05.846214 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2025-09-03 00:53:05.846219 | orchestrator | Wednesday 03 September 2025 00:49:53 +0000 (0:00:00.501) 0:07:29.856 *** 2025-09-03 00:53:05.846224 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.846229 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.846234 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.846239 | orchestrator | 2025-09-03 00:53:05.846244 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2025-09-03 00:53:05.846249 | orchestrator | Wednesday 03 September 2025 00:49:53 +0000 (0:00:00.653) 0:07:30.510 *** 2025-09-03 00:53:05.846253 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.846258 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.846263 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.846268 | orchestrator | 2025-09-03 00:53:05.846273 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2025-09-03 00:53:05.846278 | orchestrator | Wednesday 03 September 2025 00:49:56 +0000 (0:00:02.915) 0:07:33.426 *** 2025-09-03 00:53:05.846283 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-03 00:53:05.846287 | orchestrator | 2025-09-03 00:53:05.846292 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2025-09-03 00:53:05.846297 | orchestrator | Wednesday 03 September 2025 00:49:57 +0000 (0:00:00.508) 0:07:33.934 *** 2025-09-03 00:53:05.846302 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:53:05.846307 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:53:05.846312 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:53:05.846317 | orchestrator | 2025-09-03 00:53:05.846322 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2025-09-03 00:53:05.846326 | orchestrator | Wednesday 03 September 2025 00:49:58 +0000 (0:00:01.082) 0:07:35.017 *** 2025-09-03 00:53:05.846331 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:53:05.846336 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:53:05.846341 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:53:05.846346 | orchestrator | 2025-09-03 00:53:05.846351 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2025-09-03 
00:53:05.846356 | orchestrator | Wednesday 03 September 2025 00:49:59 +0000 (0:00:01.322) 0:07:36.339 *** 2025-09-03 00:53:05.846360 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:53:05.846365 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:53:05.846370 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:53:05.846375 | orchestrator | 2025-09-03 00:53:05.846380 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2025-09-03 00:53:05.846384 | orchestrator | Wednesday 03 September 2025 00:50:01 +0000 (0:00:01.705) 0:07:38.045 *** 2025-09-03 00:53:05.846392 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.846397 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.846401 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.846406 | orchestrator | 2025-09-03 00:53:05.846411 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2025-09-03 00:53:05.846415 | orchestrator | Wednesday 03 September 2025 00:50:01 +0000 (0:00:00.330) 0:07:38.375 *** 2025-09-03 00:53:05.846420 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.846425 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.846429 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.846434 | orchestrator | 2025-09-03 00:53:05.846439 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2025-09-03 00:53:05.846443 | orchestrator | Wednesday 03 September 2025 00:50:02 +0000 (0:00:00.321) 0:07:38.697 *** 2025-09-03 00:53:05.846448 | orchestrator | ok: [testbed-node-3] => (item=4) 2025-09-03 00:53:05.846453 | orchestrator | ok: [testbed-node-4] => (item=1) 2025-09-03 00:53:05.846457 | orchestrator | ok: [testbed-node-5] => (item=2) 2025-09-03 00:53:05.846462 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-09-03 00:53:05.846467 | orchestrator | ok: [testbed-node-4] => (item=5) 2025-09-03 00:53:05.846471 | orchestrator | ok: [testbed-node-5] => (item=3) 2025-09-03 00:53:05.846476 | orchestrator | 2025-09-03 00:53:05.846481 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2025-09-03 00:53:05.846485 | orchestrator | Wednesday 03 September 2025 00:50:03 +0000 (0:00:01.191) 0:07:39.888 *** 2025-09-03 00:53:05.846490 | orchestrator | changed: [testbed-node-3] => (item=4) 2025-09-03 00:53:05.846495 | orchestrator | changed: [testbed-node-4] => (item=1) 2025-09-03 00:53:05.846502 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-09-03 00:53:05.846507 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-09-03 00:53:05.846511 | orchestrator | changed: [testbed-node-4] => (item=5) 2025-09-03 00:53:05.846516 | orchestrator | changed: [testbed-node-5] => (item=3) 2025-09-03 00:53:05.846521 | orchestrator | 2025-09-03 00:53:05.846528 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2025-09-03 00:53:05.846533 | orchestrator | Wednesday 03 September 2025 00:50:05 +0000 (0:00:02.030) 0:07:41.918 *** 2025-09-03 00:53:05.846538 | orchestrator | changed: [testbed-node-3] => (item=4) 2025-09-03 00:53:05.846542 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-09-03 00:53:05.846547 | orchestrator | changed: [testbed-node-4] => (item=1) 2025-09-03 00:53:05.846552 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-09-03 00:53:05.846556 | orchestrator | changed: [testbed-node-4] => (item=5) 2025-09-03 00:53:05.846561 | 
orchestrator | changed: [testbed-node-5] => (item=3) 2025-09-03 00:53:05.846565 | orchestrator | 2025-09-03 00:53:05.846570 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2025-09-03 00:53:05.846575 | orchestrator | Wednesday 03 September 2025 00:50:08 +0000 (0:00:03.499) 0:07:45.418 *** 2025-09-03 00:53:05.846579 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.846584 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.846589 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-09-03 00:53:05.846593 | orchestrator | 2025-09-03 00:53:05.846598 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2025-09-03 00:53:05.846603 | orchestrator | Wednesday 03 September 2025 00:50:11 +0000 (0:00:02.222) 0:07:47.640 *** 2025-09-03 00:53:05.846607 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.846612 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.846617 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 2025-09-03 00:53:05.846621 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-09-03 00:53:05.846626 | orchestrator | 2025-09-03 00:53:05.846631 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2025-09-03 00:53:05.846635 | orchestrator | Wednesday 03 September 2025 00:50:24 +0000 (0:00:12.915) 0:08:00.556 *** 2025-09-03 00:53:05.846643 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.846648 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.846653 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.846657 | orchestrator | 2025-09-03 00:53:05.846662 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-03 00:53:05.846667 | orchestrator | Wednesday 03 September 2025 00:50:24 +0000 (0:00:00.793) 0:08:01.350 *** 2025-09-03 00:53:05.846671 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.846676 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.846681 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.846685 | orchestrator | 2025-09-03 00:53:05.846690 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-09-03 00:53:05.846695 | orchestrator | Wednesday 03 September 2025 00:50:25 +0000 (0:00:00.550) 0:08:01.901 *** 2025-09-03 00:53:05.846699 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-03 00:53:05.846704 | orchestrator | 2025-09-03 00:53:05.846709 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-09-03 00:53:05.846713 | orchestrator | Wednesday 03 September 2025 00:50:25 +0000 (0:00:00.529) 0:08:02.430 *** 2025-09-03 00:53:05.846718 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-03 00:53:05.846723 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-03 00:53:05.846727 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-03 00:53:05.846732 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.846737 | orchestrator | 2025-09-03 00:53:05.846741 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-09-03 00:53:05.846746 | orchestrator | 
Wednesday 03 September 2025 00:50:26 +0000 (0:00:00.383) 0:08:02.814 *** 2025-09-03 00:53:05.846751 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.846755 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.846760 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.846765 | orchestrator | 2025-09-03 00:53:05.846769 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-09-03 00:53:05.846774 | orchestrator | Wednesday 03 September 2025 00:50:26 +0000 (0:00:00.289) 0:08:03.103 *** 2025-09-03 00:53:05.846779 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.846783 | orchestrator | 2025-09-03 00:53:05.846788 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-09-03 00:53:05.846793 | orchestrator | Wednesday 03 September 2025 00:50:26 +0000 (0:00:00.211) 0:08:03.314 *** 2025-09-03 00:53:05.846797 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.846802 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.846807 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.846811 | orchestrator | 2025-09-03 00:53:05.846816 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-09-03 00:53:05.846821 | orchestrator | Wednesday 03 September 2025 00:50:27 +0000 (0:00:00.525) 0:08:03.840 *** 2025-09-03 00:53:05.846825 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.846830 | orchestrator | 2025-09-03 00:53:05.846834 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2025-09-03 00:53:05.846839 | orchestrator | Wednesday 03 September 2025 00:50:27 +0000 (0:00:00.212) 0:08:04.053 *** 2025-09-03 00:53:05.846844 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.846848 | orchestrator | 2025-09-03 00:53:05.846853 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-09-03 00:53:05.846857 | orchestrator | Wednesday 03 September 2025 00:50:27 +0000 (0:00:00.205) 0:08:04.258 *** 2025-09-03 00:53:05.846862 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.846867 | orchestrator | 2025-09-03 00:53:05.846871 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-09-03 00:53:05.846878 | orchestrator | Wednesday 03 September 2025 00:50:27 +0000 (0:00:00.120) 0:08:04.379 *** 2025-09-03 00:53:05.846886 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.846891 | orchestrator | 2025-09-03 00:53:05.846896 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-09-03 00:53:05.846900 | orchestrator | Wednesday 03 September 2025 00:50:28 +0000 (0:00:00.231) 0:08:04.610 *** 2025-09-03 00:53:05.846908 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.846913 | orchestrator | 2025-09-03 00:53:05.846918 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-09-03 00:53:05.846923 | orchestrator | Wednesday 03 September 2025 00:50:28 +0000 (0:00:00.203) 0:08:04.813 *** 2025-09-03 00:53:05.846927 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-03 00:53:05.846932 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-03 00:53:05.846937 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-03 00:53:05.846942 | orchestrator | 
skipping: [testbed-node-3] 2025-09-03 00:53:05.846946 | orchestrator | 2025-09-03 00:53:05.846951 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-09-03 00:53:05.846955 | orchestrator | Wednesday 03 September 2025 00:50:28 +0000 (0:00:00.363) 0:08:05.177 *** 2025-09-03 00:53:05.846960 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.846976 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.846981 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.846986 | orchestrator | 2025-09-03 00:53:05.846990 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-09-03 00:53:05.846995 | orchestrator | Wednesday 03 September 2025 00:50:29 +0000 (0:00:00.358) 0:08:05.535 *** 2025-09-03 00:53:05.847000 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.847004 | orchestrator | 2025-09-03 00:53:05.847009 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-09-03 00:53:05.847014 | orchestrator | Wednesday 03 September 2025 00:50:29 +0000 (0:00:00.731) 0:08:06.266 *** 2025-09-03 00:53:05.847018 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.847023 | orchestrator | 2025-09-03 00:53:05.847027 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2025-09-03 00:53:05.847032 | orchestrator | 2025-09-03 00:53:05.847037 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-03 00:53:05.847041 | orchestrator | Wednesday 03 September 2025 00:50:30 +0000 (0:00:00.668) 0:08:06.935 *** 2025-09-03 00:53:05.847046 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 00:53:05.847051 | orchestrator | 2025-09-03 00:53:05.847055 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-03 00:53:05.847060 | orchestrator | Wednesday 03 September 2025 00:50:31 +0000 (0:00:01.167) 0:08:08.102 *** 2025-09-03 00:53:05.847065 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 00:53:05.847069 | orchestrator | 2025-09-03 00:53:05.847074 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-03 00:53:05.847078 | orchestrator | Wednesday 03 September 2025 00:50:32 +0000 (0:00:01.141) 0:08:09.244 *** 2025-09-03 00:53:05.847083 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.847088 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.847092 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.847097 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:53:05.847102 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:53:05.847106 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:53:05.847111 | orchestrator | 2025-09-03 00:53:05.847116 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-03 00:53:05.847120 | orchestrator | Wednesday 03 September 2025 00:50:33 +0000 (0:00:01.240) 0:08:10.484 *** 2025-09-03 00:53:05.847125 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.847130 | orchestrator | skipping: [testbed-node-1] 2025-09-03 
00:53:05.847138 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.847142 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.847147 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.847152 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.847156 | orchestrator | 2025-09-03 00:53:05.847161 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-03 00:53:05.847166 | orchestrator | Wednesday 03 September 2025 00:50:34 +0000 (0:00:00.767) 0:08:11.252 *** 2025-09-03 00:53:05.847170 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.847175 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.847180 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.847184 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.847189 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.847194 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.847198 | orchestrator | 2025-09-03 00:53:05.847203 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-03 00:53:05.847208 | orchestrator | Wednesday 03 September 2025 00:50:35 +0000 (0:00:00.889) 0:08:12.141 *** 2025-09-03 00:53:05.847212 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.847217 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.847221 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.847226 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.847230 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.847235 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.847240 | orchestrator | 2025-09-03 00:53:05.847244 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-03 00:53:05.847249 | orchestrator | Wednesday 03 September 2025 00:50:36 +0000 (0:00:00.698) 0:08:12.839 *** 2025-09-03 00:53:05.847254 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.847258 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.847263 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.847268 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:53:05.847272 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:53:05.847277 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:53:05.847281 | orchestrator | 2025-09-03 00:53:05.847286 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-03 00:53:05.847294 | orchestrator | Wednesday 03 September 2025 00:50:37 +0000 (0:00:00.956) 0:08:13.796 *** 2025-09-03 00:53:05.847298 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.847303 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.847308 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.847312 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.847317 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.847325 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.847330 | orchestrator | 2025-09-03 00:53:05.847334 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-03 00:53:05.847339 | orchestrator | Wednesday 03 September 2025 00:50:38 +0000 (0:00:00.913) 0:08:14.709 *** 2025-09-03 00:53:05.847344 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.847349 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.847353 | orchestrator | skipping: [testbed-node-5] 
2025-09-03 00:53:05.847358 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.847363 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.847367 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.847372 | orchestrator | 2025-09-03 00:53:05.847377 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-03 00:53:05.847381 | orchestrator | Wednesday 03 September 2025 00:50:38 +0000 (0:00:00.554) 0:08:15.263 *** 2025-09-03 00:53:05.847386 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.847391 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.847395 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.847400 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:53:05.847405 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:53:05.847409 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:53:05.847418 | orchestrator | 2025-09-03 00:53:05.847423 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-03 00:53:05.847428 | orchestrator | Wednesday 03 September 2025 00:50:39 +0000 (0:00:01.209) 0:08:16.473 *** 2025-09-03 00:53:05.847433 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.847437 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.847442 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.847446 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:53:05.847451 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:53:05.847456 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:53:05.847460 | orchestrator | 2025-09-03 00:53:05.847465 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-03 00:53:05.847470 | orchestrator | Wednesday 03 September 2025 00:50:41 +0000 (0:00:01.067) 0:08:17.540 *** 2025-09-03 00:53:05.847474 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.847479 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.847484 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.847488 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.847493 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.847498 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.847502 | orchestrator | 2025-09-03 00:53:05.847507 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-03 00:53:05.847511 | orchestrator | Wednesday 03 September 2025 00:50:41 +0000 (0:00:00.864) 0:08:18.405 *** 2025-09-03 00:53:05.847516 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.847521 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.847525 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.847530 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:53:05.847535 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:53:05.847539 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:53:05.847544 | orchestrator | 2025-09-03 00:53:05.847549 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-03 00:53:05.847553 | orchestrator | Wednesday 03 September 2025 00:50:42 +0000 (0:00:00.592) 0:08:18.997 *** 2025-09-03 00:53:05.847558 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.847563 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.847567 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.847572 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.847577 | 
orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.847581 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.847586 | orchestrator | 2025-09-03 00:53:05.847590 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-03 00:53:05.847595 | orchestrator | Wednesday 03 September 2025 00:50:43 +0000 (0:00:00.794) 0:08:19.792 *** 2025-09-03 00:53:05.847600 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.847604 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.847609 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.847614 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.847618 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.847623 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.847628 | orchestrator | 2025-09-03 00:53:05.847632 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-03 00:53:05.847637 | orchestrator | Wednesday 03 September 2025 00:50:43 +0000 (0:00:00.562) 0:08:20.354 *** 2025-09-03 00:53:05.847642 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.847646 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.847651 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.847656 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.847660 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.847665 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.847669 | orchestrator | 2025-09-03 00:53:05.847674 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-03 00:53:05.847679 | orchestrator | Wednesday 03 September 2025 00:50:44 +0000 (0:00:00.790) 0:08:21.145 *** 2025-09-03 00:53:05.847683 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.847745 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.847750 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.847755 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.847759 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.847764 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.847768 | orchestrator | 2025-09-03 00:53:05.847773 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-03 00:53:05.847778 | orchestrator | Wednesday 03 September 2025 00:50:45 +0000 (0:00:00.559) 0:08:21.704 *** 2025-09-03 00:53:05.847782 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.847787 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.847792 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.847796 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:53:05.847801 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:53:05.847805 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:53:05.847810 | orchestrator | 2025-09-03 00:53:05.847818 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-03 00:53:05.847823 | orchestrator | Wednesday 03 September 2025 00:50:45 +0000 (0:00:00.804) 0:08:22.509 *** 2025-09-03 00:53:05.847827 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.847832 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.847837 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.847845 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:53:05.847850 | orchestrator | ok: [testbed-node-1] 
2025-09-03 00:53:05.847855 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:53:05.847859 | orchestrator | 2025-09-03 00:53:05.847864 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-03 00:53:05.847868 | orchestrator | Wednesday 03 September 2025 00:50:46 +0000 (0:00:00.566) 0:08:23.076 *** 2025-09-03 00:53:05.847873 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.847878 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.847882 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.847887 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:53:05.847892 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:53:05.847896 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:53:05.847901 | orchestrator | 2025-09-03 00:53:05.847905 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-03 00:53:05.847910 | orchestrator | Wednesday 03 September 2025 00:50:47 +0000 (0:00:00.828) 0:08:23.904 *** 2025-09-03 00:53:05.847915 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.847919 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.847924 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.847929 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:53:05.847933 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:53:05.847938 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:53:05.847942 | orchestrator | 2025-09-03 00:53:05.847947 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2025-09-03 00:53:05.847952 | orchestrator | Wednesday 03 September 2025 00:50:48 +0000 (0:00:01.184) 0:08:25.089 *** 2025-09-03 00:53:05.847956 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-03 00:53:05.847970 | orchestrator | 2025-09-03 00:53:05.847975 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2025-09-03 00:53:05.847980 | orchestrator | Wednesday 03 September 2025 00:50:52 +0000 (0:00:03.940) 0:08:29.029 *** 2025-09-03 00:53:05.847985 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-03 00:53:05.847989 | orchestrator | 2025-09-03 00:53:05.847994 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2025-09-03 00:53:05.847999 | orchestrator | Wednesday 03 September 2025 00:50:54 +0000 (0:00:02.012) 0:08:31.042 *** 2025-09-03 00:53:05.848003 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:53:05.848008 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:53:05.848013 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:53:05.848017 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:53:05.848026 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:53:05.848031 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:53:05.848036 | orchestrator | 2025-09-03 00:53:05.848041 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2025-09-03 00:53:05.848045 | orchestrator | Wednesday 03 September 2025 00:50:56 +0000 (0:00:02.217) 0:08:33.259 *** 2025-09-03 00:53:05.848050 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:53:05.848054 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:53:05.848059 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:53:05.848064 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:53:05.848068 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:53:05.848073 
| orchestrator | changed: [testbed-node-2] 2025-09-03 00:53:05.848078 | orchestrator | 2025-09-03 00:53:05.848082 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 2025-09-03 00:53:05.848087 | orchestrator | Wednesday 03 September 2025 00:50:57 +0000 (0:00:01.188) 0:08:34.448 *** 2025-09-03 00:53:05.848092 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 00:53:05.848097 | orchestrator | 2025-09-03 00:53:05.848102 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2025-09-03 00:53:05.848107 | orchestrator | Wednesday 03 September 2025 00:50:59 +0000 (0:00:01.294) 0:08:35.742 *** 2025-09-03 00:53:05.848111 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:53:05.848116 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:53:05.848120 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:53:05.848125 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:53:05.848130 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:53:05.848134 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:53:05.848139 | orchestrator | 2025-09-03 00:53:05.848144 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2025-09-03 00:53:05.848148 | orchestrator | Wednesday 03 September 2025 00:51:00 +0000 (0:00:01.409) 0:08:37.152 *** 2025-09-03 00:53:05.848153 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:53:05.848158 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:53:05.848162 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:53:05.848167 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:53:05.848171 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:53:05.848176 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:53:05.848181 | orchestrator | 2025-09-03 00:53:05.848185 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2025-09-03 00:53:05.848190 | orchestrator | Wednesday 03 September 2025 00:51:03 +0000 (0:00:03.236) 0:08:40.388 *** 2025-09-03 00:53:05.848195 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 00:53:05.848200 | orchestrator | 2025-09-03 00:53:05.848204 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2025-09-03 00:53:05.848209 | orchestrator | Wednesday 03 September 2025 00:51:04 +0000 (0:00:01.080) 0:08:41.468 *** 2025-09-03 00:53:05.848214 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.848218 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.848223 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.848228 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:53:05.848232 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:53:05.848237 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:53:05.848242 | orchestrator | 2025-09-03 00:53:05.848249 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2025-09-03 00:53:05.848254 | orchestrator | Wednesday 03 September 2025 00:51:05 +0000 (0:00:00.533) 0:08:42.002 *** 2025-09-03 00:53:05.848258 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:53:05.848263 | orchestrator | changed: [testbed-node-3] 2025-09-03 
00:53:05.848268 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:53:05.848276 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:53:05.848285 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:53:05.848289 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:53:05.848294 | orchestrator | 2025-09-03 00:53:05.848299 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2025-09-03 00:53:05.848304 | orchestrator | Wednesday 03 September 2025 00:51:09 +0000 (0:00:03.552) 0:08:45.554 *** 2025-09-03 00:53:05.848308 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.848313 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.848318 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.848322 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:53:05.848327 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:53:05.848332 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:53:05.848336 | orchestrator | 2025-09-03 00:53:05.848341 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2025-09-03 00:53:05.848346 | orchestrator | 2025-09-03 00:53:05.848350 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-03 00:53:05.848355 | orchestrator | Wednesday 03 September 2025 00:51:09 +0000 (0:00:00.727) 0:08:46.282 *** 2025-09-03 00:53:05.848359 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-03 00:53:05.848364 | orchestrator | 2025-09-03 00:53:05.848369 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-03 00:53:05.848373 | orchestrator | Wednesday 03 September 2025 00:51:10 +0000 (0:00:00.643) 0:08:46.926 *** 2025-09-03 00:53:05.848378 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-03 00:53:05.848383 | orchestrator | 2025-09-03 00:53:05.848388 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-03 00:53:05.848392 | orchestrator | Wednesday 03 September 2025 00:51:10 +0000 (0:00:00.434) 0:08:47.360 *** 2025-09-03 00:53:05.848397 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.848402 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.848407 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.848411 | orchestrator | 2025-09-03 00:53:05.848416 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-03 00:53:05.848421 | orchestrator | Wednesday 03 September 2025 00:51:11 +0000 (0:00:00.367) 0:08:47.728 *** 2025-09-03 00:53:05.848425 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.848430 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.848435 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.848439 | orchestrator | 2025-09-03 00:53:05.848444 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-03 00:53:05.848448 | orchestrator | Wednesday 03 September 2025 00:51:11 +0000 (0:00:00.615) 0:08:48.343 *** 2025-09-03 00:53:05.848453 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.848458 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.848462 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.848467 | orchestrator | 2025-09-03 00:53:05.848472 | 
orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-03 00:53:05.848477 | orchestrator | Wednesday 03 September 2025 00:51:12 +0000 (0:00:00.660) 0:08:49.004 *** 2025-09-03 00:53:05.848481 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.848486 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.848490 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.848495 | orchestrator | 2025-09-03 00:53:05.848500 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-03 00:53:05.848504 | orchestrator | Wednesday 03 September 2025 00:51:13 +0000 (0:00:00.626) 0:08:49.630 *** 2025-09-03 00:53:05.848509 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.848513 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.848518 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.848523 | orchestrator | 2025-09-03 00:53:05.848527 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-03 00:53:05.848532 | orchestrator | Wednesday 03 September 2025 00:51:13 +0000 (0:00:00.422) 0:08:50.053 *** 2025-09-03 00:53:05.848540 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.848545 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.848549 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.848554 | orchestrator | 2025-09-03 00:53:05.848559 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-03 00:53:05.848563 | orchestrator | Wednesday 03 September 2025 00:51:13 +0000 (0:00:00.266) 0:08:50.320 *** 2025-09-03 00:53:05.848568 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.848573 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.848577 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.848582 | orchestrator | 2025-09-03 00:53:05.848587 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-03 00:53:05.848591 | orchestrator | Wednesday 03 September 2025 00:51:14 +0000 (0:00:00.240) 0:08:50.560 *** 2025-09-03 00:53:05.848596 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.848600 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.848605 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.848610 | orchestrator | 2025-09-03 00:53:05.848614 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-03 00:53:05.848619 | orchestrator | Wednesday 03 September 2025 00:51:14 +0000 (0:00:00.648) 0:08:51.208 *** 2025-09-03 00:53:05.848624 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.848628 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.848633 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.848637 | orchestrator | 2025-09-03 00:53:05.848642 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-03 00:53:05.848647 | orchestrator | Wednesday 03 September 2025 00:51:15 +0000 (0:00:00.900) 0:08:52.109 *** 2025-09-03 00:53:05.848651 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.848656 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.848661 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.848665 | orchestrator | 2025-09-03 00:53:05.848670 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-03 
00:53:05.848677 | orchestrator | Wednesday 03 September 2025 00:51:15 +0000 (0:00:00.272) 0:08:52.382 *** 2025-09-03 00:53:05.848682 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.848687 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.848691 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.848696 | orchestrator | 2025-09-03 00:53:05.848705 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-03 00:53:05.848709 | orchestrator | Wednesday 03 September 2025 00:51:16 +0000 (0:00:00.296) 0:08:52.679 *** 2025-09-03 00:53:05.848714 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.848719 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.848723 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.848728 | orchestrator | 2025-09-03 00:53:05.848733 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-03 00:53:05.848737 | orchestrator | Wednesday 03 September 2025 00:51:16 +0000 (0:00:00.319) 0:08:52.998 *** 2025-09-03 00:53:05.848742 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.848747 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.848751 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.848756 | orchestrator | 2025-09-03 00:53:05.848760 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-03 00:53:05.848765 | orchestrator | Wednesday 03 September 2025 00:51:16 +0000 (0:00:00.440) 0:08:53.439 *** 2025-09-03 00:53:05.848770 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.848774 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.848779 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.848783 | orchestrator | 2025-09-03 00:53:05.848788 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-03 00:53:05.848793 | orchestrator | Wednesday 03 September 2025 00:51:17 +0000 (0:00:00.352) 0:08:53.792 *** 2025-09-03 00:53:05.848797 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.848805 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.848810 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.848815 | orchestrator | 2025-09-03 00:53:05.848819 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-03 00:53:05.848824 | orchestrator | Wednesday 03 September 2025 00:51:17 +0000 (0:00:00.331) 0:08:54.124 *** 2025-09-03 00:53:05.848829 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.848834 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.848838 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.848843 | orchestrator | 2025-09-03 00:53:05.848848 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-03 00:53:05.848852 | orchestrator | Wednesday 03 September 2025 00:51:17 +0000 (0:00:00.315) 0:08:54.439 *** 2025-09-03 00:53:05.848857 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.848862 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.848866 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.848871 | orchestrator | 2025-09-03 00:53:05.848875 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-03 00:53:05.848880 | orchestrator | Wednesday 03 September 2025 00:51:18 +0000 (0:00:00.584) 0:08:55.024 *** 
2025-09-03 00:53:05.848885 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.848889 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.848894 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.848899 | orchestrator | 2025-09-03 00:53:05.848903 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-03 00:53:05.848908 | orchestrator | Wednesday 03 September 2025 00:51:18 +0000 (0:00:00.325) 0:08:55.350 *** 2025-09-03 00:53:05.848913 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.848917 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.848922 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.848927 | orchestrator | 2025-09-03 00:53:05.848931 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2025-09-03 00:53:05.848936 | orchestrator | Wednesday 03 September 2025 00:51:19 +0000 (0:00:00.567) 0:08:55.918 *** 2025-09-03 00:53:05.848940 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.848945 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.848950 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2025-09-03 00:53:05.848954 | orchestrator | 2025-09-03 00:53:05.848959 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2025-09-03 00:53:05.848989 | orchestrator | Wednesday 03 September 2025 00:51:20 +0000 (0:00:00.686) 0:08:56.604 *** 2025-09-03 00:53:05.848994 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-03 00:53:05.848998 | orchestrator | 2025-09-03 00:53:05.849003 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2025-09-03 00:53:05.849007 | orchestrator | Wednesday 03 September 2025 00:51:22 +0000 (0:00:02.181) 0:08:58.785 *** 2025-09-03 00:53:05.849013 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2025-09-03 00:53:05.849019 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.849023 | orchestrator | 2025-09-03 00:53:05.849028 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2025-09-03 00:53:05.849032 | orchestrator | Wednesday 03 September 2025 00:51:22 +0000 (0:00:00.178) 0:08:58.964 *** 2025-09-03 00:53:05.849038 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-03 00:53:05.849048 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-03 00:53:05.849056 | orchestrator | 2025-09-03 00:53:05.849064 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2025-09-03 00:53:05.849069 | orchestrator | Wednesday 03 September 2025 00:51:30 +0000 (0:00:07.659) 0:09:06.624 *** 2025-09-03 00:53:05.849073 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 
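Editor's note: the "Create filesystem pools" and "Create ceph filesystem" tasks above amount to a few ceph CLI calls with the parameters shown in the item dicts (pg_num 16, size 3, replicated_rule). A rough manual equivalent, assuming the default cluster name ceph and a filesystem called cephfs, run on a monitor node (in this containerized testbed the commands would typically be wrapped in an exec into the mon container):

    # Create the data and metadata pools with 16 PGs on the replicated rule
    ceph osd pool create cephfs_data 16 16 replicated_rule
    ceph osd pool create cephfs_metadata 16 16 replicated_rule
    ceph osd pool set cephfs_data size 3
    ceph osd pool set cephfs_metadata size 3
    # Tie the two pools together into a filesystem and verify it
    ceph fs new cephfs cephfs_metadata cephfs_data
    ceph fs status cephfs
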
2025-09-03 00:53:05.849078 | orchestrator | 2025-09-03 00:53:05.849086 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2025-09-03 00:53:05.849091 | orchestrator | Wednesday 03 September 2025 00:51:33 +0000 (0:00:03.671) 0:09:10.295 *** 2025-09-03 00:53:05.849096 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-03 00:53:05.849100 | orchestrator | 2025-09-03 00:53:05.849105 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2025-09-03 00:53:05.849109 | orchestrator | Wednesday 03 September 2025 00:51:34 +0000 (0:00:00.830) 0:09:11.125 *** 2025-09-03 00:53:05.849114 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2025-09-03 00:53:05.849118 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2025-09-03 00:53:05.849123 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2025-09-03 00:53:05.849127 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2025-09-03 00:53:05.849132 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2025-09-03 00:53:05.849137 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2025-09-03 00:53:05.849141 | orchestrator | 2025-09-03 00:53:05.849145 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2025-09-03 00:53:05.849150 | orchestrator | Wednesday 03 September 2025 00:51:35 +0000 (0:00:01.007) 0:09:12.132 *** 2025-09-03 00:53:05.849154 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-03 00:53:05.849158 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-03 00:53:05.849162 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-03 00:53:05.849166 | orchestrator | 2025-09-03 00:53:05.849170 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2025-09-03 00:53:05.849174 | orchestrator | Wednesday 03 September 2025 00:51:37 +0000 (0:00:02.139) 0:09:14.271 *** 2025-09-03 00:53:05.849179 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-03 00:53:05.849183 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-03 00:53:05.849187 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:53:05.849191 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-03 00:53:05.849195 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-09-03 00:53:05.849200 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:53:05.849204 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-03 00:53:05.849208 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-09-03 00:53:05.849212 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:53:05.849216 | orchestrator | 2025-09-03 00:53:05.849221 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2025-09-03 00:53:05.849225 | orchestrator | Wednesday 03 September 2025 00:51:38 +0000 (0:00:01.157) 0:09:15.428 *** 2025-09-03 00:53:05.849229 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:53:05.849233 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:53:05.849237 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:53:05.849242 | orchestrator | 2025-09-03 00:53:05.849246 | 
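Editor's note: the "Create mds keyring" step above generates one keyring per MDS host under the /var/lib/ceph/mds/ceph-<hostname>/ directory created just before it. An illustrative shell equivalent; the exact capabilities ceph-ansible grants may differ from the typical MDS caps shown here:

    # Create (or fetch) the MDS key for this host and write it where the
    # containerized MDS expects it; ownership follows the usual ceph:ceph layout.
    host=$(hostname)
    ceph auth get-or-create "mds.${host}" \
         mon 'allow profile mds' osd 'allow rwx' mds 'allow' \
         -o "/var/lib/ceph/mds/ceph-${host}/keyring"
    chown ceph:ceph "/var/lib/ceph/mds/ceph-${host}/keyring"
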
orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2025-09-03 00:53:05.849250 | orchestrator | Wednesday 03 September 2025 00:51:41 +0000 (0:00:02.585) 0:09:18.014 *** 2025-09-03 00:53:05.849254 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.849258 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.849262 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.849270 | orchestrator | 2025-09-03 00:53:05.849274 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2025-09-03 00:53:05.849278 | orchestrator | Wednesday 03 September 2025 00:51:42 +0000 (0:00:00.692) 0:09:18.707 *** 2025-09-03 00:53:05.849283 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-03 00:53:05.849287 | orchestrator | 2025-09-03 00:53:05.849291 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2025-09-03 00:53:05.849295 | orchestrator | Wednesday 03 September 2025 00:51:42 +0000 (0:00:00.561) 0:09:19.269 *** 2025-09-03 00:53:05.849299 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-03 00:53:05.849304 | orchestrator | 2025-09-03 00:53:05.849308 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2025-09-03 00:53:05.849312 | orchestrator | Wednesday 03 September 2025 00:51:43 +0000 (0:00:00.792) 0:09:20.061 *** 2025-09-03 00:53:05.849316 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:53:05.849320 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:53:05.849325 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:53:05.849329 | orchestrator | 2025-09-03 00:53:05.849333 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2025-09-03 00:53:05.849337 | orchestrator | Wednesday 03 September 2025 00:51:44 +0000 (0:00:01.248) 0:09:21.310 *** 2025-09-03 00:53:05.849341 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:53:05.849345 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:53:05.849350 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:53:05.849354 | orchestrator | 2025-09-03 00:53:05.849358 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2025-09-03 00:53:05.849362 | orchestrator | Wednesday 03 September 2025 00:51:45 +0000 (0:00:01.106) 0:09:22.417 *** 2025-09-03 00:53:05.849366 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:53:05.849371 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:53:05.849375 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:53:05.849379 | orchestrator | 2025-09-03 00:53:05.849383 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2025-09-03 00:53:05.849390 | orchestrator | Wednesday 03 September 2025 00:51:47 +0000 (0:00:01.605) 0:09:24.023 *** 2025-09-03 00:53:05.849394 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:53:05.849399 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:53:05.849403 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:53:05.849407 | orchestrator | 2025-09-03 00:53:05.849414 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2025-09-03 00:53:05.849419 | orchestrator | Wednesday 03 September 2025 00:51:49 +0000 (0:00:02.193) 
0:09:26.216 *** 2025-09-03 00:53:05.849423 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.849427 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.849431 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.849436 | orchestrator | 2025-09-03 00:53:05.849440 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-03 00:53:05.849444 | orchestrator | Wednesday 03 September 2025 00:51:50 +0000 (0:00:01.189) 0:09:27.406 *** 2025-09-03 00:53:05.849448 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:53:05.849452 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:53:05.849457 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:53:05.849461 | orchestrator | 2025-09-03 00:53:05.849465 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-09-03 00:53:05.849470 | orchestrator | Wednesday 03 September 2025 00:51:51 +0000 (0:00:00.998) 0:09:28.404 *** 2025-09-03 00:53:05.849474 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-03 00:53:05.849478 | orchestrator | 2025-09-03 00:53:05.849482 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-09-03 00:53:05.849486 | orchestrator | Wednesday 03 September 2025 00:51:52 +0000 (0:00:00.516) 0:09:28.921 *** 2025-09-03 00:53:05.849493 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.849497 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.849502 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.849506 | orchestrator | 2025-09-03 00:53:05.849510 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-09-03 00:53:05.849514 | orchestrator | Wednesday 03 September 2025 00:51:52 +0000 (0:00:00.293) 0:09:29.215 *** 2025-09-03 00:53:05.849518 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:53:05.849523 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:53:05.849527 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:53:05.849531 | orchestrator | 2025-09-03 00:53:05.849535 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-09-03 00:53:05.849540 | orchestrator | Wednesday 03 September 2025 00:51:54 +0000 (0:00:01.458) 0:09:30.673 *** 2025-09-03 00:53:05.849544 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-03 00:53:05.849548 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-03 00:53:05.849552 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-03 00:53:05.849556 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.849561 | orchestrator | 2025-09-03 00:53:05.849565 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-09-03 00:53:05.849569 | orchestrator | Wednesday 03 September 2025 00:51:54 +0000 (0:00:00.619) 0:09:31.292 *** 2025-09-03 00:53:05.849573 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.849577 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.849582 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.849586 | orchestrator | 2025-09-03 00:53:05.849590 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-09-03 00:53:05.849594 | orchestrator | 2025-09-03 00:53:05.849599 | orchestrator | TASK [ceph-handler : Include 
check_running_cluster.yml] ************************ 2025-09-03 00:53:05.849603 | orchestrator | Wednesday 03 September 2025 00:51:55 +0000 (0:00:00.546) 0:09:31.839 *** 2025-09-03 00:53:05.849607 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-03 00:53:05.849611 | orchestrator | 2025-09-03 00:53:05.849615 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-03 00:53:05.849620 | orchestrator | Wednesday 03 September 2025 00:51:56 +0000 (0:00:00.744) 0:09:32.583 *** 2025-09-03 00:53:05.849624 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-03 00:53:05.849628 | orchestrator | 2025-09-03 00:53:05.849632 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-03 00:53:05.849637 | orchestrator | Wednesday 03 September 2025 00:51:56 +0000 (0:00:00.523) 0:09:33.107 *** 2025-09-03 00:53:05.849641 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.849645 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.849649 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.849653 | orchestrator | 2025-09-03 00:53:05.849658 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-03 00:53:05.849662 | orchestrator | Wednesday 03 September 2025 00:51:57 +0000 (0:00:00.493) 0:09:33.600 *** 2025-09-03 00:53:05.849666 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.849670 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.849674 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.849678 | orchestrator | 2025-09-03 00:53:05.849683 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-03 00:53:05.849687 | orchestrator | Wednesday 03 September 2025 00:51:57 +0000 (0:00:00.726) 0:09:34.327 *** 2025-09-03 00:53:05.849691 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.849695 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.849700 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.849704 | orchestrator | 2025-09-03 00:53:05.849708 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-03 00:53:05.849715 | orchestrator | Wednesday 03 September 2025 00:51:58 +0000 (0:00:00.704) 0:09:35.032 *** 2025-09-03 00:53:05.849719 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.849723 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.849727 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.849732 | orchestrator | 2025-09-03 00:53:05.849736 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-03 00:53:05.849740 | orchestrator | Wednesday 03 September 2025 00:51:59 +0000 (0:00:00.722) 0:09:35.754 *** 2025-09-03 00:53:05.849748 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.849753 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.849757 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.849761 | orchestrator | 2025-09-03 00:53:05.849765 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-03 00:53:05.849773 | orchestrator | Wednesday 03 September 2025 00:51:59 +0000 (0:00:00.538) 0:09:36.293 *** 2025-09-03 00:53:05.849777 | 
orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.849782 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.849786 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.849790 | orchestrator | 2025-09-03 00:53:05.849795 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-03 00:53:05.849799 | orchestrator | Wednesday 03 September 2025 00:52:00 +0000 (0:00:00.333) 0:09:36.627 *** 2025-09-03 00:53:05.849803 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.849807 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.849812 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.849816 | orchestrator | 2025-09-03 00:53:05.849820 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-03 00:53:05.849824 | orchestrator | Wednesday 03 September 2025 00:52:00 +0000 (0:00:00.339) 0:09:36.966 *** 2025-09-03 00:53:05.849829 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.849833 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.849837 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.849841 | orchestrator | 2025-09-03 00:53:05.849846 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-03 00:53:05.849850 | orchestrator | Wednesday 03 September 2025 00:52:01 +0000 (0:00:00.778) 0:09:37.745 *** 2025-09-03 00:53:05.849854 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.849858 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.849863 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.849867 | orchestrator | 2025-09-03 00:53:05.849871 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-03 00:53:05.849875 | orchestrator | Wednesday 03 September 2025 00:52:02 +0000 (0:00:01.080) 0:09:38.825 *** 2025-09-03 00:53:05.849879 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.849884 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.849888 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.849892 | orchestrator | 2025-09-03 00:53:05.849896 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-03 00:53:05.849900 | orchestrator | Wednesday 03 September 2025 00:52:02 +0000 (0:00:00.331) 0:09:39.157 *** 2025-09-03 00:53:05.849905 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.849909 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.849913 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.849917 | orchestrator | 2025-09-03 00:53:05.849921 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-03 00:53:05.849926 | orchestrator | Wednesday 03 September 2025 00:52:02 +0000 (0:00:00.316) 0:09:39.474 *** 2025-09-03 00:53:05.849930 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.849934 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.849938 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.849943 | orchestrator | 2025-09-03 00:53:05.849947 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-03 00:53:05.849951 | orchestrator | Wednesday 03 September 2025 00:52:03 +0000 (0:00:00.336) 0:09:39.810 *** 2025-09-03 00:53:05.849958 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.849974 | orchestrator | ok: [testbed-node-4] 2025-09-03 
00:53:05.849979 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.849983 | orchestrator | 2025-09-03 00:53:05.849987 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-03 00:53:05.849991 | orchestrator | Wednesday 03 September 2025 00:52:03 +0000 (0:00:00.632) 0:09:40.443 *** 2025-09-03 00:53:05.849996 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.850000 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.850004 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.850008 | orchestrator | 2025-09-03 00:53:05.850012 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-03 00:53:05.850029 | orchestrator | Wednesday 03 September 2025 00:52:04 +0000 (0:00:00.329) 0:09:40.773 *** 2025-09-03 00:53:05.850034 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.850038 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.850042 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.850047 | orchestrator | 2025-09-03 00:53:05.850051 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-03 00:53:05.850055 | orchestrator | Wednesday 03 September 2025 00:52:04 +0000 (0:00:00.303) 0:09:41.076 *** 2025-09-03 00:53:05.850059 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.850064 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.850068 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.850072 | orchestrator | 2025-09-03 00:53:05.850076 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-03 00:53:05.850080 | orchestrator | Wednesday 03 September 2025 00:52:04 +0000 (0:00:00.287) 0:09:41.364 *** 2025-09-03 00:53:05.850084 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.850089 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.850093 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.850097 | orchestrator | 2025-09-03 00:53:05.850101 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-03 00:53:05.850105 | orchestrator | Wednesday 03 September 2025 00:52:05 +0000 (0:00:00.780) 0:09:42.144 *** 2025-09-03 00:53:05.850110 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.850114 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.850118 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.850122 | orchestrator | 2025-09-03 00:53:05.850126 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-03 00:53:05.850131 | orchestrator | Wednesday 03 September 2025 00:52:05 +0000 (0:00:00.351) 0:09:42.495 *** 2025-09-03 00:53:05.850135 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.850139 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.850143 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.850147 | orchestrator | 2025-09-03 00:53:05.850152 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2025-09-03 00:53:05.850156 | orchestrator | Wednesday 03 September 2025 00:52:06 +0000 (0:00:00.527) 0:09:43.023 *** 2025-09-03 00:53:05.850174 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-03 00:53:05.850178 | orchestrator | 2025-09-03 00:53:05.850183 | orchestrator | TASK [ceph-rgw : Get keys from monitors] 
*************************************** 2025-09-03 00:53:05.850187 | orchestrator | Wednesday 03 September 2025 00:52:07 +0000 (0:00:00.761) 0:09:43.785 *** 2025-09-03 00:53:05.850195 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-03 00:53:05.850199 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-03 00:53:05.850204 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-03 00:53:05.850208 | orchestrator | 2025-09-03 00:53:05.850212 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-09-03 00:53:05.850216 | orchestrator | Wednesday 03 September 2025 00:52:09 +0000 (0:00:02.143) 0:09:45.928 *** 2025-09-03 00:53:05.850221 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-03 00:53:05.850229 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-03 00:53:05.850233 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:53:05.850237 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-03 00:53:05.850242 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-09-03 00:53:05.850246 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:53:05.850250 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-03 00:53:05.850255 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-09-03 00:53:05.850259 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:53:05.850263 | orchestrator | 2025-09-03 00:53:05.850267 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2025-09-03 00:53:05.850272 | orchestrator | Wednesday 03 September 2025 00:52:10 +0000 (0:00:01.169) 0:09:47.098 *** 2025-09-03 00:53:05.850276 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.850280 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.850284 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.850289 | orchestrator | 2025-09-03 00:53:05.850293 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2025-09-03 00:53:05.850297 | orchestrator | Wednesday 03 September 2025 00:52:10 +0000 (0:00:00.305) 0:09:47.403 *** 2025-09-03 00:53:05.850301 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-03 00:53:05.850306 | orchestrator | 2025-09-03 00:53:05.850310 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2025-09-03 00:53:05.850314 | orchestrator | Wednesday 03 September 2025 00:52:11 +0000 (0:00:00.712) 0:09:48.115 *** 2025-09-03 00:53:05.850318 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-03 00:53:05.850323 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-03 00:53:05.850327 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-03 00:53:05.850331 | orchestrator | 2025-09-03 00:53:05.850336 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2025-09-03 00:53:05.850340 | orchestrator | Wednesday 03 September 
2025 00:52:12 +0000 (0:00:00.844) 0:09:48.960 *** 2025-09-03 00:53:05.850344 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-03 00:53:05.850348 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-09-03 00:53:05.850353 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-03 00:53:05.850357 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-09-03 00:53:05.850361 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-03 00:53:05.850366 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-09-03 00:53:05.850370 | orchestrator | 2025-09-03 00:53:05.850374 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-09-03 00:53:05.850379 | orchestrator | Wednesday 03 September 2025 00:52:16 +0000 (0:00:04.237) 0:09:53.197 *** 2025-09-03 00:53:05.850383 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-03 00:53:05.850387 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-03 00:53:05.850391 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-03 00:53:05.850396 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-03 00:53:05.850403 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-03 00:53:05.850407 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-03 00:53:05.850411 | orchestrator | 2025-09-03 00:53:05.850415 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-09-03 00:53:05.850420 | orchestrator | Wednesday 03 September 2025 00:52:19 +0000 (0:00:02.768) 0:09:55.966 *** 2025-09-03 00:53:05.850424 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-03 00:53:05.850428 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:53:05.850432 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-03 00:53:05.850437 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:53:05.850441 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-03 00:53:05.850445 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:53:05.850449 | orchestrator | 2025-09-03 00:53:05.850456 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2025-09-03 00:53:05.850461 | orchestrator | Wednesday 03 September 2025 00:52:20 +0000 (0:00:01.194) 0:09:57.160 *** 2025-09-03 00:53:05.850469 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-09-03 00:53:05.850473 | orchestrator | 2025-09-03 00:53:05.850477 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2025-09-03 00:53:05.850482 | orchestrator | Wednesday 03 September 2025 00:52:20 +0000 (0:00:00.238) 0:09:57.399 *** 2025-09-03 00:53:05.850486 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-03 00:53:05.850490 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-03 00:53:05.850494 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-03 00:53:05.850499 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-03 00:53:05.850503 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-03 00:53:05.850507 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.850511 | orchestrator | 2025-09-03 00:53:05.850516 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2025-09-03 00:53:05.850520 | orchestrator | Wednesday 03 September 2025 00:52:21 +0000 (0:00:00.612) 0:09:58.011 *** 2025-09-03 00:53:05.850524 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-03 00:53:05.850528 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-03 00:53:05.850533 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-03 00:53:05.850537 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-03 00:53:05.850541 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-03 00:53:05.850545 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.850549 | orchestrator | 2025-09-03 00:53:05.850554 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2025-09-03 00:53:05.850558 | orchestrator | Wednesday 03 September 2025 00:52:22 +0000 (0:00:00.603) 0:09:58.614 *** 2025-09-03 00:53:05.850562 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-03 00:53:05.850566 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-03 00:53:05.850576 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-03 00:53:05.850580 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-03 00:53:05.850584 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-03 00:53:05.850588 | orchestrator | 2025-09-03 00:53:05.850592 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2025-09-03 00:53:05.850597 | orchestrator | Wednesday 03 September 2025 00:52:52 +0000 (0:00:30.744) 0:10:29.359 *** 2025-09-03 00:53:05.850601 | orchestrator | skipping: [testbed-node-3] 2025-09-03 
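Editor's note: the "Create rgw pools" task above (about 30 s in the recap below) creates the default.rgw.* pools with the pg_num 8 / size 3 settings shown in the items. A rough ceph CLI equivalent, assuming it is run on a monitor node of this cluster:

    # Create each RGW pool, set the replica count, and tag it for the rgw application
    for pool in default.rgw.buckets.data default.rgw.buckets.index \
                default.rgw.control default.rgw.log default.rgw.meta; do
        ceph osd pool create "$pool" 8 8 replicated
        ceph osd pool set "$pool" size 3
        ceph osd pool application enable "$pool" rgw
    done
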
00:53:05.850605 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.850609 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.850614 | orchestrator | 2025-09-03 00:53:05.850618 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2025-09-03 00:53:05.850622 | orchestrator | Wednesday 03 September 2025 00:52:53 +0000 (0:00:00.297) 0:10:29.656 *** 2025-09-03 00:53:05.850626 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.850631 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.850635 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.850639 | orchestrator | 2025-09-03 00:53:05.850643 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2025-09-03 00:53:05.850647 | orchestrator | Wednesday 03 September 2025 00:52:53 +0000 (0:00:00.591) 0:10:30.247 *** 2025-09-03 00:53:05.850652 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-03 00:53:05.850656 | orchestrator | 2025-09-03 00:53:05.850660 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2025-09-03 00:53:05.850664 | orchestrator | Wednesday 03 September 2025 00:52:54 +0000 (0:00:00.563) 0:10:30.811 *** 2025-09-03 00:53:05.850671 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-03 00:53:05.850676 | orchestrator | 2025-09-03 00:53:05.850680 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2025-09-03 00:53:05.850684 | orchestrator | Wednesday 03 September 2025 00:52:55 +0000 (0:00:00.847) 0:10:31.658 *** 2025-09-03 00:53:05.850692 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:53:05.850696 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:53:05.850700 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:53:05.850705 | orchestrator | 2025-09-03 00:53:05.850709 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2025-09-03 00:53:05.850713 | orchestrator | Wednesday 03 September 2025 00:52:56 +0000 (0:00:01.257) 0:10:32.916 *** 2025-09-03 00:53:05.850717 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:53:05.850721 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:53:05.850726 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:53:05.850730 | orchestrator | 2025-09-03 00:53:05.850734 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2025-09-03 00:53:05.850738 | orchestrator | Wednesday 03 September 2025 00:52:57 +0000 (0:00:01.169) 0:10:34.086 *** 2025-09-03 00:53:05.850742 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:53:05.850747 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:53:05.850751 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:53:05.850755 | orchestrator | 2025-09-03 00:53:05.850759 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2025-09-03 00:53:05.850764 | orchestrator | Wednesday 03 September 2025 00:52:59 +0000 (0:00:01.718) 0:10:35.804 *** 2025-09-03 00:53:05.850768 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-03 00:53:05.850776 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 
'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-03 00:53:05.850780 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-03 00:53:05.850784 | orchestrator | 2025-09-03 00:53:05.850789 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-03 00:53:05.850793 | orchestrator | Wednesday 03 September 2025 00:53:01 +0000 (0:00:02.588) 0:10:38.393 *** 2025-09-03 00:53:05.850797 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.850801 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.850805 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.850810 | orchestrator | 2025-09-03 00:53:05.850814 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-09-03 00:53:05.850818 | orchestrator | Wednesday 03 September 2025 00:53:02 +0000 (0:00:00.301) 0:10:38.694 *** 2025-09-03 00:53:05.850822 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-03 00:53:05.850826 | orchestrator | 2025-09-03 00:53:05.850831 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-09-03 00:53:05.850835 | orchestrator | Wednesday 03 September 2025 00:53:02 +0000 (0:00:00.707) 0:10:39.402 *** 2025-09-03 00:53:05.850839 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.850843 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.850848 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.850852 | orchestrator | 2025-09-03 00:53:05.850856 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-09-03 00:53:05.850860 | orchestrator | Wednesday 03 September 2025 00:53:03 +0000 (0:00:00.274) 0:10:39.676 *** 2025-09-03 00:53:05.850864 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.850869 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:53:05.850873 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:53:05.850877 | orchestrator | 2025-09-03 00:53:05.850881 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-09-03 00:53:05.850885 | orchestrator | Wednesday 03 September 2025 00:53:03 +0000 (0:00:00.314) 0:10:39.991 *** 2025-09-03 00:53:05.850890 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-03 00:53:05.850894 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-03 00:53:05.850898 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-03 00:53:05.850902 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:53:05.850907 | orchestrator | 2025-09-03 00:53:05.850911 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-09-03 00:53:05.850915 | orchestrator | Wednesday 03 September 2025 00:53:04 +0000 (0:00:00.890) 0:10:40.881 *** 2025-09-03 00:53:05.850919 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:53:05.850923 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:53:05.850928 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:53:05.850932 | orchestrator | 2025-09-03 00:53:05.850936 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-03 00:53:05.850940 | orchestrator | testbed-node-0 : ok=134  changed=35  
unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2025-09-03 00:53:05.850945 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2025-09-03 00:53:05.850949 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2025-09-03 00:53:05.850953 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2025-09-03 00:53:05.850974 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2025-09-03 00:53:05.850979 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2025-09-03 00:53:05.850984 | orchestrator | 2025-09-03 00:53:05.850988 | orchestrator | 2025-09-03 00:53:05.850992 | orchestrator | 2025-09-03 00:53:05.850999 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-03 00:53:05.851003 | orchestrator | Wednesday 03 September 2025 00:53:04 +0000 (0:00:00.180) 0:10:41.062 *** 2025-09-03 00:53:05.851008 | orchestrator | =============================================================================== 2025-09-03 00:53:05.851012 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 55.76s 2025-09-03 00:53:05.851016 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 44.96s 2025-09-03 00:53:05.851020 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 30.74s 2025-09-03 00:53:05.851024 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 30.53s 2025-09-03 00:53:05.851028 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... 
------------ 21.81s 2025-09-03 00:53:05.851032 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 14.22s 2025-09-03 00:53:05.851036 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.92s 2025-09-03 00:53:05.851041 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.65s 2025-09-03 00:53:05.851045 | orchestrator | ceph-mon : Fetch ceph initial keys ------------------------------------- 10.35s 2025-09-03 00:53:05.851049 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 7.66s 2025-09-03 00:53:05.851053 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.69s 2025-09-03 00:53:05.851057 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.35s 2025-09-03 00:53:05.851061 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.84s 2025-09-03 00:53:05.851065 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.24s 2025-09-03 00:53:05.851069 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 3.94s 2025-09-03 00:53:05.851073 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.67s 2025-09-03 00:53:05.851078 | orchestrator | ceph-handler : Restart the ceph-crash service --------------------------- 3.55s 2025-09-03 00:53:05.851082 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.50s 2025-09-03 00:53:05.851086 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 3.47s 2025-09-03 00:53:05.851090 | orchestrator | ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created --- 3.40s 2025-09-03 00:53:05.851094 | orchestrator | 2025-09-03 00:53:05 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:53:08.878164 | orchestrator | 2025-09-03 00:53:08 | INFO  | Task dc4e6937-8cfe-476c-b3c6-dd57d24263be is in state STARTED 2025-09-03 00:53:08.880198 | orchestrator | 2025-09-03 00:53:08 | INFO  | Task cfa9ed72-9ddc-459e-9720-a08d833ef43a is in state STARTED 2025-09-03 00:53:08.881933 | orchestrator | 2025-09-03 00:53:08 | INFO  | Task 9dc94f79-5744-45c7-ab6a-e86571d87258 is in state STARTED 2025-09-03 00:53:08.882393 | orchestrator | 2025-09-03 00:53:08 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:53:11.931844 | orchestrator | 2025-09-03 00:53:11 | INFO  | Task dc4e6937-8cfe-476c-b3c6-dd57d24263be is in state STARTED 2025-09-03 00:53:11.933279 | orchestrator | 2025-09-03 00:53:11 | INFO  | Task cfa9ed72-9ddc-459e-9720-a08d833ef43a is in state STARTED 2025-09-03 00:53:11.934363 | orchestrator | 2025-09-03 00:53:11 | INFO  | Task 9dc94f79-5744-45c7-ab6a-e86571d87258 is in state STARTED 2025-09-03 00:53:11.934727 | orchestrator | 2025-09-03 00:53:11 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:53:14.979777 | orchestrator | 2025-09-03 00:53:14 | INFO  | Task dc4e6937-8cfe-476c-b3c6-dd57d24263be is in state STARTED 2025-09-03 00:53:14.984339 | orchestrator | 2025-09-03 00:53:14 | INFO  | Task cfa9ed72-9ddc-459e-9720-a08d833ef43a is in state STARTED 2025-09-03 00:53:14.987294 | orchestrator | 2025-09-03 00:53:14 | INFO  | Task 9dc94f79-5744-45c7-ab6a-e86571d87258 is in state STARTED 2025-09-03 00:53:14.988283 | orchestrator | 2025-09-03 00:53:14 | INFO  | Wait 1 second(s) until 
the next check
2025-09-03 00:53:18.040112 | orchestrator | 2025-09-03 00:53:18 | INFO  | Task dc4e6937-8cfe-476c-b3c6-dd57d24263be is in state STARTED
2025-09-03 00:53:18.040223 | orchestrator | 2025-09-03 00:53:18 | INFO  | Task cfa9ed72-9ddc-459e-9720-a08d833ef43a is in state STARTED
2025-09-03 00:53:18.040369 | orchestrator | 2025-09-03 00:53:18 | INFO  | Task 9dc94f79-5744-45c7-ab6a-e86571d87258 is in state STARTED
2025-09-03 00:53:18.040389 | orchestrator | 2025-09-03 00:53:18 | INFO  | Wait 1 second(s) until the next check
[... the same status check for these three tasks repeats roughly every 3 seconds from 00:53:21 through 00:53:57, all three remaining in state STARTED, with "Wait 1 second(s) until the next check" after each cycle ...]
2025-09-03 00:53:57.695014 | orchestrator | 2025-09-03
00:53:57 | INFO  | Task 9dc94f79-5744-45c7-ab6a-e86571d87258 is in state STARTED 2025-09-03 00:53:57.695181 | orchestrator | 2025-09-03 00:53:57 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:54:00.746893 | orchestrator | 2025-09-03 00:54:00 | INFO  | Task dc4e6937-8cfe-476c-b3c6-dd57d24263be is in state STARTED 2025-09-03 00:54:00.747663 | orchestrator | 2025-09-03 00:54:00 | INFO  | Task cfa9ed72-9ddc-459e-9720-a08d833ef43a is in state STARTED 2025-09-03 00:54:00.749257 | orchestrator | 2025-09-03 00:54:00 | INFO  | Task 9dc94f79-5744-45c7-ab6a-e86571d87258 is in state STARTED 2025-09-03 00:54:00.749427 | orchestrator | 2025-09-03 00:54:00 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:54:03.797299 | orchestrator | 2025-09-03 00:54:03 | INFO  | Task dc4e6937-8cfe-476c-b3c6-dd57d24263be is in state SUCCESS 2025-09-03 00:54:03.798472 | orchestrator | 2025-09-03 00:54:03.798517 | orchestrator | 2025-09-03 00:54:03.798530 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-03 00:54:03.798543 | orchestrator | 2025-09-03 00:54:03.798555 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-03 00:54:03.798566 | orchestrator | Wednesday 03 September 2025 00:50:59 +0000 (0:00:00.240) 0:00:00.240 *** 2025-09-03 00:54:03.798578 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:54:03.798592 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:54:03.798604 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:54:03.798615 | orchestrator | 2025-09-03 00:54:03.798627 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-03 00:54:03.798638 | orchestrator | Wednesday 03 September 2025 00:50:59 +0000 (0:00:00.220) 0:00:00.460 *** 2025-09-03 00:54:03.798651 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2025-09-03 00:54:03.798663 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2025-09-03 00:54:03.798675 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2025-09-03 00:54:03.798686 | orchestrator | 2025-09-03 00:54:03.798697 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2025-09-03 00:54:03.798708 | orchestrator | 2025-09-03 00:54:03.798719 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-03 00:54:03.798730 | orchestrator | Wednesday 03 September 2025 00:50:59 +0000 (0:00:00.290) 0:00:00.750 *** 2025-09-03 00:54:03.798741 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 00:54:03.798753 | orchestrator | 2025-09-03 00:54:03.798764 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2025-09-03 00:54:03.798779 | orchestrator | Wednesday 03 September 2025 00:51:00 +0000 (0:00:00.369) 0:00:01.120 *** 2025-09-03 00:54:03.798790 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-03 00:54:03.798801 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-03 00:54:03.798812 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-03 00:54:03.798823 | orchestrator | 2025-09-03 00:54:03.798834 | orchestrator | TASK [opensearch : Ensuring config directories exist] 
************************** 2025-09-03 00:54:03.798845 | orchestrator | Wednesday 03 September 2025 00:51:00 +0000 (0:00:00.605) 0:00:01.725 *** 2025-09-03 00:54:03.798877 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-03 00:54:03.798915 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-03 00:54:03.798940 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-03 00:54:03.798983 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 
'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-03 00:54:03.799004 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-03 00:54:03.799026 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-03 00:54:03.799039 | orchestrator | 2025-09-03 00:54:03.799051 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-03 00:54:03.799065 | orchestrator | Wednesday 03 September 2025 00:51:02 +0000 (0:00:01.410) 0:00:03.136 *** 2025-09-03 00:54:03.799078 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 00:54:03.799124 | orchestrator | 2025-09-03 00:54:03.799138 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-09-03 00:54:03.799150 | orchestrator | Wednesday 03 September 2025 00:51:02 +0000 (0:00:00.463) 0:00:03.599 *** 2025-09-03 00:54:03.799174 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-03 00:54:03.799190 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-03 00:54:03.799210 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-03 00:54:03.799234 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-03 00:54:03.799256 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 
'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-03 00:54:03.799272 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-03 00:54:03.799285 | orchestrator | 2025-09-03 00:54:03.799298 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-09-03 00:54:03.799312 | orchestrator | Wednesday 03 September 2025 00:51:04 +0000 (0:00:02.411) 0:00:06.010 *** 2025-09-03 00:54:03.799333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-03 00:54:03.799440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-03 00:54:03.799466 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:54:03.799479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-03 00:54:03.799502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-03 00:54:03.799515 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:54:03.799531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-03 00:54:03.799551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-03 00:54:03.799563 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:54:03.799575 | orchestrator | 2025-09-03 00:54:03.799586 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-09-03 00:54:03.799597 | orchestrator | Wednesday 03 September 2025 00:51:06 +0000 (0:00:01.312) 0:00:07.323 *** 2025-09-03 00:54:03.799609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-03 00:54:03.799629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-03 00:54:03.799649 | orchestrator | skipping: [testbed-node-0] 
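The "Copying over backend internal TLS certificate" and "Copying over backend internal TLS key" items are skipped on every node here, most likely because backend TLS is not enabled for OpenSearch in this testbed (the healthchecks above all probe plain http:// endpoints). As a minimal sketch of what such a conditional per-service copy task can look like, assuming illustrative names (opensearch_enable_tls_backend, kolla_certificates_dir, opensearch_services) rather than the actual kolla-ansible role internals:

    # Illustrative sketch only -- variable and path names are assumptions, not the real role code.
    - name: "opensearch | Copying over backend internal TLS certificate"
      ansible.builtin.copy:
        src: "{{ kolla_certificates_dir }}/opensearch-cert.pem"   # assumed source location
        dest: "/etc/kolla/{{ item.key }}/opensearch-cert.pem"     # per-service config dir
        mode: "0600"
      loop: "{{ opensearch_services | dict2items }}"              # same per-service items seen in the log
      when: opensearch_enable_tls_backend | bool                  # false in this run, so every item is skipped

With the flag false, Ansible evaluates the when condition per loop item and reports each one as "skipping", which matches the per-item skip records logged for testbed-node-0, testbed-node-1 and testbed-node-2.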
2025-09-03 00:54:03.799661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-03 00:54:03.799678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-03 00:54:03.799690 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:54:03.799702 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-03 00:54:03.799722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-03 00:54:03.799741 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:54:03.799753 | orchestrator | 2025-09-03 00:54:03.799764 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-09-03 00:54:03.799775 | orchestrator | Wednesday 03 September 2025 00:51:07 +0000 (0:00:00.785) 0:00:08.108 *** 2025-09-03 00:54:03.799787 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-03 00:54:03.799804 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-03 00:54:03.799816 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-03 00:54:03.799835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 
'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-03 00:54:03.799848 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-03 00:54:03.799873 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-03 00:54:03.799886 | orchestrator | 2025-09-03 00:54:03.799897 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-09-03 00:54:03.799909 | orchestrator | Wednesday 03 September 2025 00:51:09 +0000 (0:00:02.313) 0:00:10.422 *** 2025-09-03 00:54:03.799920 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:54:03.799931 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:54:03.799942 | 
orchestrator | changed: [testbed-node-2] 2025-09-03 00:54:03.799953 | orchestrator | 2025-09-03 00:54:03.799983 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-09-03 00:54:03.799994 | orchestrator | Wednesday 03 September 2025 00:51:12 +0000 (0:00:03.239) 0:00:13.661 *** 2025-09-03 00:54:03.800005 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:54:03.800016 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:54:03.800027 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:54:03.800038 | orchestrator | 2025-09-03 00:54:03.800049 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-09-03 00:54:03.800060 | orchestrator | Wednesday 03 September 2025 00:51:14 +0000 (0:00:01.698) 0:00:15.360 *** 2025-09-03 00:54:03.800072 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-03 00:54:03.800090 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-03 00:54:03.800110 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-03 00:54:03.800127 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-03 00:54:03.800140 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-03 00:54:03.800160 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-03 00:54:03.800179 | orchestrator | 2025-09-03 00:54:03.800190 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-03 00:54:03.800201 | orchestrator | Wednesday 03 September 2025 00:51:16 +0000 (0:00:02.265) 0:00:17.626 *** 2025-09-03 00:54:03.800212 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:54:03.800223 | orchestrator | skipping: 
[testbed-node-1] 2025-09-03 00:54:03.800235 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:54:03.800246 | orchestrator | 2025-09-03 00:54:03.800256 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-09-03 00:54:03.800267 | orchestrator | Wednesday 03 September 2025 00:51:16 +0000 (0:00:00.384) 0:00:18.010 *** 2025-09-03 00:54:03.800278 | orchestrator | 2025-09-03 00:54:03.800289 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-09-03 00:54:03.800299 | orchestrator | Wednesday 03 September 2025 00:51:17 +0000 (0:00:00.087) 0:00:18.098 *** 2025-09-03 00:54:03.800310 | orchestrator | 2025-09-03 00:54:03.800321 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-09-03 00:54:03.800332 | orchestrator | Wednesday 03 September 2025 00:51:17 +0000 (0:00:00.089) 0:00:18.187 *** 2025-09-03 00:54:03.800343 | orchestrator | 2025-09-03 00:54:03.800353 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2025-09-03 00:54:03.800364 | orchestrator | Wednesday 03 September 2025 00:51:17 +0000 (0:00:00.065) 0:00:18.253 *** 2025-09-03 00:54:03.800375 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:54:03.800386 | orchestrator | 2025-09-03 00:54:03.800397 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2025-09-03 00:54:03.800408 | orchestrator | Wednesday 03 September 2025 00:51:17 +0000 (0:00:00.190) 0:00:18.444 *** 2025-09-03 00:54:03.800418 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:54:03.800429 | orchestrator | 2025-09-03 00:54:03.800445 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2025-09-03 00:54:03.800457 | orchestrator | Wednesday 03 September 2025 00:51:18 +0000 (0:00:00.709) 0:00:19.154 *** 2025-09-03 00:54:03.800468 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:54:03.800478 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:54:03.800489 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:54:03.800500 | orchestrator | 2025-09-03 00:54:03.800511 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2025-09-03 00:54:03.800522 | orchestrator | Wednesday 03 September 2025 00:52:25 +0000 (0:01:07.477) 0:01:26.632 *** 2025-09-03 00:54:03.800532 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:54:03.800543 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:54:03.800554 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:54:03.800565 | orchestrator | 2025-09-03 00:54:03.800576 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-03 00:54:03.800586 | orchestrator | Wednesday 03 September 2025 00:53:50 +0000 (0:01:24.608) 0:02:51.240 *** 2025-09-03 00:54:03.800597 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 00:54:03.800608 | orchestrator | 2025-09-03 00:54:03.800619 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2025-09-03 00:54:03.800636 | orchestrator | Wednesday 03 September 2025 00:53:50 +0000 (0:00:00.543) 0:02:51.784 *** 2025-09-03 00:54:03.800647 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:54:03.800659 | orchestrator | 2025-09-03 00:54:03.800670 | orchestrator | TASK [opensearch : Check 
if a log retention policy exists] ********************* 2025-09-03 00:54:03.800681 | orchestrator | Wednesday 03 September 2025 00:53:53 +0000 (0:00:02.670) 0:02:54.455 *** 2025-09-03 00:54:03.800692 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:54:03.800703 | orchestrator | 2025-09-03 00:54:03.800714 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2025-09-03 00:54:03.800725 | orchestrator | Wednesday 03 September 2025 00:53:55 +0000 (0:00:02.207) 0:02:56.662 *** 2025-09-03 00:54:03.800735 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:54:03.800746 | orchestrator | 2025-09-03 00:54:03.800757 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2025-09-03 00:54:03.800768 | orchestrator | Wednesday 03 September 2025 00:53:58 +0000 (0:00:02.602) 0:02:59.265 *** 2025-09-03 00:54:03.800779 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:54:03.800790 | orchestrator | 2025-09-03 00:54:03.800801 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-03 00:54:03.800812 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-03 00:54:03.800824 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-03 00:54:03.800835 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-03 00:54:03.800846 | orchestrator | 2025-09-03 00:54:03.800857 | orchestrator | 2025-09-03 00:54:03.800868 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-03 00:54:03.800885 | orchestrator | Wednesday 03 September 2025 00:54:00 +0000 (0:00:02.571) 0:03:01.837 *** 2025-09-03 00:54:03.800896 | orchestrator | =============================================================================== 2025-09-03 00:54:03.800907 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 84.61s 2025-09-03 00:54:03.800918 | orchestrator | opensearch : Restart opensearch container ------------------------------ 67.48s 2025-09-03 00:54:03.800929 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.24s 2025-09-03 00:54:03.800940 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.67s 2025-09-03 00:54:03.800951 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.60s 2025-09-03 00:54:03.800990 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.57s 2025-09-03 00:54:03.801001 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.41s 2025-09-03 00:54:03.801012 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.31s 2025-09-03 00:54:03.801023 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.27s 2025-09-03 00:54:03.801034 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.21s 2025-09-03 00:54:03.801045 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.70s 2025-09-03 00:54:03.801056 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.41s 2025-09-03 00:54:03.801067 | orchestrator | service-cert-copy : opensearch | Copying over 
backend internal TLS certificate --- 1.31s 2025-09-03 00:54:03.801078 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.79s 2025-09-03 00:54:03.801089 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.71s 2025-09-03 00:54:03.801100 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.61s 2025-09-03 00:54:03.801111 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.54s 2025-09-03 00:54:03.801152 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.46s 2025-09-03 00:54:03.801164 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.38s 2025-09-03 00:54:03.801175 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.37s 2025-09-03 00:54:03.801316 | orchestrator | 2025-09-03 00:54:03 | INFO  | Task cfa9ed72-9ddc-459e-9720-a08d833ef43a is in state STARTED 2025-09-03 00:54:03.801335 | orchestrator | 2025-09-03 00:54:03 | INFO  | Task 9dc94f79-5744-45c7-ab6a-e86571d87258 is in state STARTED 2025-09-03 00:54:03.801763 | orchestrator | 2025-09-03 00:54:03 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:54:06.848197 | orchestrator | 2025-09-03 00:54:06 | INFO  | Task cfa9ed72-9ddc-459e-9720-a08d833ef43a is in state STARTED 2025-09-03 00:54:06.848369 | orchestrator | 2025-09-03 00:54:06 | INFO  | Task 9dc94f79-5744-45c7-ab6a-e86571d87258 is in state STARTED 2025-09-03 00:54:06.848388 | orchestrator | 2025-09-03 00:54:06 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:54:09.891340 | orchestrator | 2025-09-03 00:54:09 | INFO  | Task cfa9ed72-9ddc-459e-9720-a08d833ef43a is in state STARTED 2025-09-03 00:54:09.894220 | orchestrator | 2025-09-03 00:54:09.894254 | orchestrator | 2025-09-03 00:54:09 | INFO  | Task 9dc94f79-5744-45c7-ab6a-e86571d87258 is in state SUCCESS 2025-09-03 00:54:09.895871 | orchestrator | 2025-09-03 00:54:09.895905 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2025-09-03 00:54:09.895918 | orchestrator | 2025-09-03 00:54:09.895930 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-09-03 00:54:09.895942 | orchestrator | Wednesday 03 September 2025 00:50:59 +0000 (0:00:00.097) 0:00:00.097 *** 2025-09-03 00:54:09.895953 | orchestrator | ok: [localhost] => { 2025-09-03 00:54:09.895999 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2025-09-03 00:54:09.896011 | orchestrator | } 2025-09-03 00:54:09.896023 | orchestrator | 2025-09-03 00:54:09.896034 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-09-03 00:54:09.896045 | orchestrator | Wednesday 03 September 2025 00:50:59 +0000 (0:00:00.036) 0:00:00.133 *** 2025-09-03 00:54:09.896057 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-09-03 00:54:09.896069 | orchestrator | ...ignoring 2025-09-03 00:54:09.896081 | orchestrator | 2025-09-03 00:54:09.896092 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-09-03 00:54:09.896103 | orchestrator | Wednesday 03 September 2025 00:51:02 +0000 (0:00:02.799) 0:00:02.933 *** 2025-09-03 00:54:09.896115 | orchestrator | skipping: [localhost] 2025-09-03 00:54:09.896126 | orchestrator | 2025-09-03 00:54:09.896137 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-09-03 00:54:09.896148 | orchestrator | Wednesday 03 September 2025 00:51:02 +0000 (0:00:00.054) 0:00:02.988 *** 2025-09-03 00:54:09.896159 | orchestrator | ok: [localhost] 2025-09-03 00:54:09.896170 | orchestrator | 2025-09-03 00:54:09.896181 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-03 00:54:09.896192 | orchestrator | 2025-09-03 00:54:09.896203 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-03 00:54:09.896214 | orchestrator | Wednesday 03 September 2025 00:51:02 +0000 (0:00:00.145) 0:00:03.133 *** 2025-09-03 00:54:09.896225 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:54:09.896236 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:54:09.896247 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:54:09.896258 | orchestrator | 2025-09-03 00:54:09.896269 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-03 00:54:09.896280 | orchestrator | Wednesday 03 September 2025 00:51:02 +0000 (0:00:00.278) 0:00:03.412 *** 2025-09-03 00:54:09.896291 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-09-03 00:54:09.896329 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-09-03 00:54:09.896341 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-09-03 00:54:09.896352 | orchestrator | 2025-09-03 00:54:09.896363 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-09-03 00:54:09.896375 | orchestrator | 2025-09-03 00:54:09.896386 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-09-03 00:54:09.896397 | orchestrator | Wednesday 03 September 2025 00:51:02 +0000 (0:00:00.474) 0:00:03.886 *** 2025-09-03 00:54:09.896408 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-03 00:54:09.896419 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-09-03 00:54:09.896430 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-09-03 00:54:09.896441 | orchestrator | 2025-09-03 00:54:09.896452 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-03 00:54:09.896463 | orchestrator | Wednesday 03 September 2025 00:51:03 +0000 (0:00:00.315) 0:00:04.201 *** 2025-09-03 00:54:09.896473 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 00:54:09.896485 | orchestrator | 2025-09-03 00:54:09.896496 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-09-03 00:54:09.896508 | orchestrator | Wednesday 03 September 2025 00:51:03 +0000 (0:00:00.445) 0:00:04.646 *** 2025-09-03 
00:54:09.896552 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-03 00:54:09.896571 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-03 00:54:09.896602 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-03 00:54:09.896616 | orchestrator | 2025-09-03 00:54:09.896633 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-09-03 00:54:09.896645 | orchestrator | Wednesday 03 September 2025 00:51:07 +0000 (0:00:03.262) 0:00:07.909 *** 2025-09-03 00:54:09.896656 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:54:09.896668 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:54:09.896679 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:54:09.896690 | orchestrator | 2025-09-03 00:54:09.896701 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-09-03 00:54:09.896712 | orchestrator | Wednesday 03 September 2025 00:51:07 +0000 (0:00:00.593) 0:00:08.502 *** 2025-09-03 00:54:09.896722 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:54:09.896733 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:54:09.896744 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:54:09.896755 | orchestrator | 2025-09-03 00:54:09.896766 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-09-03 00:54:09.896777 | orchestrator | Wednesday 03 September 2025 00:51:08 +0000 (0:00:01.331) 0:00:09.833 *** 2025-09-03 00:54:09.896796 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-03 00:54:09.896827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-03 00:54:09.896842 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-03 00:54:09.896861 | orchestrator | 2025-09-03 00:54:09.896873 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-09-03 00:54:09.896884 | orchestrator | Wednesday 03 September 2025 00:51:12 +0000 (0:00:03.997) 0:00:13.831 *** 2025-09-03 00:54:09.896895 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:54:09.896906 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:54:09.896917 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:54:09.896928 | orchestrator | 2025-09-03 00:54:09.896939 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-09-03 00:54:09.896949 | orchestrator | Wednesday 03 September 2025 00:51:14 +0000 (0:00:01.121) 0:00:14.953 *** 2025-09-03 00:54:09.896979 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:54:09.896990 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:54:09.897001 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:54:09.897012 | orchestrator | 2025-09-03 00:54:09.897023 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-03 00:54:09.897034 | orchestrator | Wednesday 03 September 2025 00:51:18 +0000 (0:00:04.596) 0:00:19.549 *** 2025-09-03 00:54:09.897045 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 00:54:09.897056 | orchestrator | 2025-09-03 00:54:09.897067 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-09-03 00:54:09.897078 | orchestrator | Wednesday 03 September 2025 00:51:19 +0000 (0:00:00.682) 0:00:20.231 *** 2025-09-03 00:54:09.897104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 
'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-03 00:54:09.897124 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:54:09.897137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 
fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-03 00:54:09.897149 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:54:09.897172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-03 00:54:09.897195 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:54:09.897206 | orchestrator | 2025-09-03 00:54:09.897217 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-09-03 00:54:09.897228 | orchestrator | Wednesday 03 September 2025 00:51:22 +0000 (0:00:02.857) 0:00:23.089 *** 2025-09-03 00:54:09.897240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-03 00:54:09.897252 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:54:09.897275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-03 00:54:09.897294 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:54:09.897307 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 
'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-03 00:54:09.897319 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:54:09.897330 | orchestrator | 2025-09-03 00:54:09.897341 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-09-03 00:54:09.897352 | orchestrator | Wednesday 03 September 2025 00:51:24 +0000 (0:00:02.318) 0:00:25.408 *** 2025-09-03 00:54:09.897368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-03 00:54:09.897387 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:54:09.897407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-03 00:54:09.897420 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:54:09.897436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-03 00:54:09.897455 | 
orchestrator | skipping: [testbed-node-2] 2025-09-03 00:54:09.897466 | orchestrator | 2025-09-03 00:54:09.897477 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-09-03 00:54:09.897488 | orchestrator | Wednesday 03 September 2025 00:51:27 +0000 (0:00:02.516) 0:00:27.924 *** 2025-09-03 00:54:09.897507 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-03 00:54:09.897526 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': 
'3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-03 00:54:09.897553 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-03 00:54:09.897566 | orchestrator | 2025-09-03 00:54:09.897577 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2025-09-03 00:54:09.897589 | orchestrator | Wednesday 03 September 2025 00:51:30 +0000 (0:00:03.553) 0:00:31.478 *** 2025-09-03 00:54:09.897600 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:54:09.897611 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:54:09.897622 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:54:09.897633 | orchestrator | 2025-09-03 00:54:09.897644 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-09-03 00:54:09.897655 | orchestrator | Wednesday 03 September 2025 00:51:31 +0000 (0:00:01.012) 0:00:32.490 *** 2025-09-03 00:54:09.897666 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:54:09.897677 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:54:09.897688 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:54:09.897699 | orchestrator | 2025-09-03 00:54:09.897710 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-09-03 00:54:09.897721 | orchestrator | Wednesday 03 September 2025 00:51:32 +0000 
(0:00:00.899) 0:00:33.390 *** 2025-09-03 00:54:09.897732 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:54:09.897743 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:54:09.897754 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:54:09.897765 | orchestrator | 2025-09-03 00:54:09.897776 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2025-09-03 00:54:09.897787 | orchestrator | Wednesday 03 September 2025 00:51:32 +0000 (0:00:00.469) 0:00:33.860 *** 2025-09-03 00:54:09.897799 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2025-09-03 00:54:09.897810 | orchestrator | ...ignoring 2025-09-03 00:54:09.897822 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2025-09-03 00:54:09.897833 | orchestrator | ...ignoring 2025-09-03 00:54:09.897844 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-09-03 00:54:09.897861 | orchestrator | ...ignoring 2025-09-03 00:54:09.897872 | orchestrator | 2025-09-03 00:54:09.897883 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-09-03 00:54:09.897894 | orchestrator | Wednesday 03 September 2025 00:51:43 +0000 (0:00:11.001) 0:00:44.861 *** 2025-09-03 00:54:09.897905 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:54:09.897916 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:54:09.897927 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:54:09.897938 | orchestrator | 2025-09-03 00:54:09.897949 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-09-03 00:54:09.897978 | orchestrator | Wednesday 03 September 2025 00:51:44 +0000 (0:00:00.412) 0:00:45.274 *** 2025-09-03 00:54:09.897989 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:54:09.898000 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:54:09.898011 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:54:09.898077 | orchestrator | 2025-09-03 00:54:09.898088 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-09-03 00:54:09.898107 | orchestrator | Wednesday 03 September 2025 00:51:45 +0000 (0:00:00.636) 0:00:45.910 *** 2025-09-03 00:54:09.898118 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:54:09.898129 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:54:09.898140 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:54:09.898151 | orchestrator | 2025-09-03 00:54:09.898162 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-09-03 00:54:09.898173 | orchestrator | Wednesday 03 September 2025 00:51:45 +0000 (0:00:00.505) 0:00:46.416 *** 2025-09-03 00:54:09.898184 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:54:09.898195 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:54:09.898206 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:54:09.898217 | orchestrator | 2025-09-03 00:54:09.898228 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-09-03 00:54:09.898238 | orchestrator | Wednesday 03 September 2025 00:51:45 +0000 (0:00:00.462) 0:00:46.878 *** 2025-09-03 00:54:09.898249 | 
orchestrator | ok: [testbed-node-0] 2025-09-03 00:54:09.898260 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:54:09.898271 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:54:09.898282 | orchestrator | 2025-09-03 00:54:09.898293 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-09-03 00:54:09.898304 | orchestrator | Wednesday 03 September 2025 00:51:46 +0000 (0:00:00.410) 0:00:47.288 *** 2025-09-03 00:54:09.898322 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:54:09.898334 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:54:09.898345 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:54:09.898356 | orchestrator | 2025-09-03 00:54:09.898367 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-03 00:54:09.898377 | orchestrator | Wednesday 03 September 2025 00:51:47 +0000 (0:00:00.853) 0:00:48.142 *** 2025-09-03 00:54:09.898388 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:54:09.898399 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:54:09.898410 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-09-03 00:54:09.898421 | orchestrator | 2025-09-03 00:54:09.898432 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-09-03 00:54:09.898443 | orchestrator | Wednesday 03 September 2025 00:51:47 +0000 (0:00:00.367) 0:00:48.509 *** 2025-09-03 00:54:09.898454 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:54:09.898465 | orchestrator | 2025-09-03 00:54:09.898475 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-09-03 00:54:09.898486 | orchestrator | Wednesday 03 September 2025 00:51:57 +0000 (0:00:10.098) 0:00:58.608 *** 2025-09-03 00:54:09.898497 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:54:09.898508 | orchestrator | 2025-09-03 00:54:09.898519 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-03 00:54:09.898537 | orchestrator | Wednesday 03 September 2025 00:51:57 +0000 (0:00:00.114) 0:00:58.722 *** 2025-09-03 00:54:09.898548 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:54:09.898559 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:54:09.898570 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:54:09.898581 | orchestrator | 2025-09-03 00:54:09.898592 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-09-03 00:54:09.898603 | orchestrator | Wednesday 03 September 2025 00:51:58 +0000 (0:00:01.012) 0:00:59.735 *** 2025-09-03 00:54:09.898614 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:54:09.898625 | orchestrator | 2025-09-03 00:54:09.898635 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-09-03 00:54:09.898646 | orchestrator | Wednesday 03 September 2025 00:52:06 +0000 (0:00:07.767) 0:01:07.502 *** 2025-09-03 00:54:09.898657 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:54:09.898668 | orchestrator | 2025-09-03 00:54:09.898679 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2025-09-03 00:54:09.898690 | orchestrator | Wednesday 03 September 2025 00:52:09 +0000 (0:00:02.535) 0:01:10.037 *** 2025-09-03 00:54:09.898701 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:54:09.898711 | orchestrator | 2025-09-03 
00:54:09.898722 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-09-03 00:54:09.898733 | orchestrator | Wednesday 03 September 2025 00:52:11 +0000 (0:00:02.448) 0:01:12.486 *** 2025-09-03 00:54:09.898744 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:54:09.898755 | orchestrator | 2025-09-03 00:54:09.898766 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-09-03 00:54:09.898777 | orchestrator | Wednesday 03 September 2025 00:52:11 +0000 (0:00:00.132) 0:01:12.618 *** 2025-09-03 00:54:09.898788 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:54:09.898799 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:54:09.898810 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:54:09.898821 | orchestrator | 2025-09-03 00:54:09.898832 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-09-03 00:54:09.898843 | orchestrator | Wednesday 03 September 2025 00:52:12 +0000 (0:00:00.347) 0:01:12.965 *** 2025-09-03 00:54:09.898854 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:54:09.898865 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-09-03 00:54:09.898875 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:54:09.898886 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:54:09.898897 | orchestrator | 2025-09-03 00:54:09.898908 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-09-03 00:54:09.898919 | orchestrator | skipping: no hosts matched 2025-09-03 00:54:09.898930 | orchestrator | 2025-09-03 00:54:09.898941 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-09-03 00:54:09.898952 | orchestrator | 2025-09-03 00:54:09.899028 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-09-03 00:54:09.899040 | orchestrator | Wednesday 03 September 2025 00:52:12 +0000 (0:00:00.544) 0:01:13.510 *** 2025-09-03 00:54:09.899051 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:54:09.899062 | orchestrator | 2025-09-03 00:54:09.899073 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-09-03 00:54:09.899084 | orchestrator | Wednesday 03 September 2025 00:52:31 +0000 (0:00:18.569) 0:01:32.079 *** 2025-09-03 00:54:09.899095 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:54:09.899106 | orchestrator | 2025-09-03 00:54:09.899116 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-09-03 00:54:09.899133 | orchestrator | Wednesday 03 September 2025 00:52:52 +0000 (0:00:21.689) 0:01:53.769 *** 2025-09-03 00:54:09.899144 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:54:09.899154 | orchestrator | 2025-09-03 00:54:09.899165 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-09-03 00:54:09.899176 | orchestrator | 2025-09-03 00:54:09.899187 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-09-03 00:54:09.899205 | orchestrator | Wednesday 03 September 2025 00:52:55 +0000 (0:00:02.341) 0:01:56.111 *** 2025-09-03 00:54:09.899216 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:54:09.899227 | orchestrator | 2025-09-03 00:54:09.899238 | orchestrator | TASK [mariadb : Wait for MariaDB service port 
liveness] ************************ 2025-09-03 00:54:09.899249 | orchestrator | Wednesday 03 September 2025 00:53:18 +0000 (0:00:23.644) 0:02:19.755 *** 2025-09-03 00:54:09.899260 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:54:09.899270 | orchestrator | 2025-09-03 00:54:09.899281 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-09-03 00:54:09.899292 | orchestrator | Wednesday 03 September 2025 00:53:34 +0000 (0:00:15.587) 0:02:35.342 *** 2025-09-03 00:54:09.899303 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:54:09.899314 | orchestrator | 2025-09-03 00:54:09.899325 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-09-03 00:54:09.899336 | orchestrator | 2025-09-03 00:54:09.899353 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-09-03 00:54:09.899364 | orchestrator | Wednesday 03 September 2025 00:53:36 +0000 (0:00:02.473) 0:02:37.816 *** 2025-09-03 00:54:09.899375 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:54:09.899386 | orchestrator | 2025-09-03 00:54:09.899397 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-09-03 00:54:09.899408 | orchestrator | Wednesday 03 September 2025 00:53:53 +0000 (0:00:16.937) 0:02:54.753 *** 2025-09-03 00:54:09.899418 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:54:09.899429 | orchestrator | 2025-09-03 00:54:09.899440 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-09-03 00:54:09.899451 | orchestrator | Wednesday 03 September 2025 00:53:54 +0000 (0:00:00.558) 0:02:55.312 *** 2025-09-03 00:54:09.899461 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:54:09.899472 | orchestrator | 2025-09-03 00:54:09.899483 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-09-03 00:54:09.899494 | orchestrator | 2025-09-03 00:54:09.899505 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-09-03 00:54:09.899515 | orchestrator | Wednesday 03 September 2025 00:53:57 +0000 (0:00:02.626) 0:02:57.938 *** 2025-09-03 00:54:09.899526 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 00:54:09.899535 | orchestrator | 2025-09-03 00:54:09.899545 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-09-03 00:54:09.899555 | orchestrator | Wednesday 03 September 2025 00:53:57 +0000 (0:00:00.498) 0:02:58.437 *** 2025-09-03 00:54:09.899564 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:54:09.899574 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:54:09.899584 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:54:09.899593 | orchestrator | 2025-09-03 00:54:09.899603 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-09-03 00:54:09.899613 | orchestrator | Wednesday 03 September 2025 00:53:59 +0000 (0:00:02.125) 0:03:00.562 *** 2025-09-03 00:54:09.899622 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:54:09.899632 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:54:09.899642 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:54:09.899651 | orchestrator | 2025-09-03 00:54:09.899661 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-09-03 00:54:09.899671 
| orchestrator | Wednesday 03 September 2025 00:54:01 +0000 (0:00:02.224) 0:03:02.787 ***
2025-09-03 00:54:09.899680 | orchestrator | skipping: [testbed-node-1]
2025-09-03 00:54:09.899690 | orchestrator | skipping: [testbed-node-2]
2025-09-03 00:54:09.899700 | orchestrator | changed: [testbed-node-0]
2025-09-03 00:54:09.899709 | orchestrator |
2025-09-03 00:54:09.899719 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] ***
2025-09-03 00:54:09.899729 | orchestrator | Wednesday 03 September 2025 00:54:04 +0000 (0:00:02.166) 0:03:04.954 ***
2025-09-03 00:54:09.899738 | orchestrator | skipping: [testbed-node-1]
2025-09-03 00:54:09.899754 | orchestrator | skipping: [testbed-node-2]
2025-09-03 00:54:09.899764 | orchestrator | changed: [testbed-node-0]
2025-09-03 00:54:09.899774 | orchestrator |
2025-09-03 00:54:09.899783 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] **************
2025-09-03 00:54:09.899793 | orchestrator | Wednesday 03 September 2025 00:54:06 +0000 (0:00:02.275) 0:03:07.229 ***
2025-09-03 00:54:09.899803 | orchestrator | ok: [testbed-node-1]
2025-09-03 00:54:09.899813 | orchestrator | ok: [testbed-node-2]
2025-09-03 00:54:09.899823 | orchestrator | ok: [testbed-node-0]
2025-09-03 00:54:09.899832 | orchestrator |
2025-09-03 00:54:09.899842 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2025-09-03 00:54:09.899852 | orchestrator | Wednesday 03 September 2025 00:54:09 +0000 (0:00:02.814) 0:03:10.044 ***
2025-09-03 00:54:09.899861 | orchestrator | skipping: [testbed-node-0]
2025-09-03 00:54:09.899871 | orchestrator | skipping: [testbed-node-1]
2025-09-03 00:54:09.899881 | orchestrator | skipping: [testbed-node-2]
2025-09-03 00:54:09.899891 | orchestrator |
2025-09-03 00:54:09.899900 | orchestrator | PLAY RECAP *********************************************************************
2025-09-03 00:54:09.899910 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2025-09-03 00:54:09.899921 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1
2025-09-03 00:54:09.899932 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
2025-09-03 00:54:09.899947 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
2025-09-03 00:54:09.899973 | orchestrator |
2025-09-03 00:54:09.899984 | orchestrator |
2025-09-03 00:54:09.899993 | orchestrator | TASKS RECAP ********************************************************************
2025-09-03 00:54:09.900003 | orchestrator | Wednesday 03 September 2025 00:54:09 +0000 (0:00:00.390) 0:03:10.434 ***
2025-09-03 00:54:09.900013 | orchestrator | ===============================================================================
2025-09-03 00:54:09.900022 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 42.21s
2025-09-03 00:54:09.900032 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 37.28s
2025-09-03 00:54:09.900041 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 16.94s
2025-09-03 00:54:09.900051 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 11.00s
2025-09-03 00:54:09.900061 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.10s
2025-09-03 00:54:09.900070 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.77s
2025-09-03 00:54:09.900085 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.81s
2025-09-03 00:54:09.900095 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.60s
2025-09-03 00:54:09.900105 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.00s
2025-09-03 00:54:09.900114 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.55s
2025-09-03 00:54:09.900124 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.26s
2025-09-03 00:54:09.900134 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 2.86s
2025-09-03 00:54:09.900144 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.81s
2025-09-03 00:54:09.900153 | orchestrator | Check MariaDB service --------------------------------------------------- 2.80s
2025-09-03 00:54:09.900163 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.63s
2025-09-03 00:54:09.900172 | orchestrator | mariadb : Wait for first MariaDB service port liveness ------------------ 2.54s
2025-09-03 00:54:09.900182 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.52s
2025-09-03 00:54:09.900198 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.45s
2025-09-03 00:54:09.900208 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.32s
2025-09-03 00:54:09.900217 | orchestrator | mariadb : Granting permissions on Mariabackup database to backup user --- 2.28s
2025-09-03 00:54:09.900227 | orchestrator | 2025-09-03 00:54:09 | INFO  | Wait 1 second(s) until the next check
2025-09-03 00:54:12.954329 | orchestrator | 2025-09-03 00:54:12 | INFO  | Task cfa9ed72-9ddc-459e-9720-a08d833ef43a is in state STARTED
2025-09-03 00:54:12.955924 | orchestrator | 2025-09-03 00:54:12 | INFO  | Task 4d2348d5-93ad-4368-a272-6786b8c24e42 is in state STARTED
2025-09-03 00:54:12.957582 | orchestrator | 2025-09-03 00:54:12 | INFO  | Task 1c08e4a0-f2e8-406d-969e-ff3637583606 is in state STARTED
2025-09-03 00:54:12.957630 | orchestrator | 2025-09-03 00:54:12 | INFO  | Wait 1 second(s) until the next check
2025-09-03 00:54:15.987559 | orchestrator | 2025-09-03 00:54:15 | INFO  | Task cfa9ed72-9ddc-459e-9720-a08d833ef43a is in state STARTED
2025-09-03 00:54:15.987668 | orchestrator | 2025-09-03 00:54:15 | INFO  | Task 4d2348d5-93ad-4368-a272-6786b8c24e42 is in state STARTED
2025-09-03 00:54:15.988897 | orchestrator | 2025-09-03 00:54:15 | INFO  | Task 1c08e4a0-f2e8-406d-969e-ff3637583606 is in state STARTED
2025-09-03 00:54:15.988924 | orchestrator | 2025-09-03 00:54:15 | INFO  | Wait 1 second(s) until the next check
2025-09-03 00:54:19.036834 | orchestrator | 2025-09-03 00:54:19 | INFO  | Task cfa9ed72-9ddc-459e-9720-a08d833ef43a is in state STARTED
2025-09-03 00:54:19.037137 | orchestrator | 2025-09-03 00:54:19 | INFO  | Task 4d2348d5-93ad-4368-a272-6786b8c24e42 is in state STARTED
2025-09-03 00:54:19.037172 | orchestrator | 2025-09-03 00:54:19 | INFO  | Task 1c08e4a0-f2e8-406d-969e-ff3637583606 is in state STARTED
2025-09-03 00:54:19.037185 | orchestrator | 2025-09-03 00:54:19 | INFO  | Wait 1 second(s) until the next check
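Note on the ignored failures earlier in this play: the messages "Timeout when waiting for search string MariaDB in <ip>:3306" come from a deliberate probe, not a real error. Before anything is deployed, the playbooks check whether a MariaDB server already answers on port 3306 (first on the internal VIP to pick the Kolla action, later on each node to decide whether a cluster already exists); on a fresh testbed the timeout is the expected outcome, which is why the "This is fine." hint is printed and the failures are ignored. A minimal, roughly equivalent Ansible probe is sketched below; the real tasks in the OSISM/kolla-ansible roles may use different option values, and the addresses are simply the ones visible in this log.

    - name: Probe for a running MariaDB server (sketch)
      ansible.builtin.wait_for:
        host: 192.168.16.9      # internal VIP; the per-node checks use 192.168.16.10-12
        port: 3306
        search_regex: MariaDB   # the server greeting contains its version string
        timeout: 2
      register: mariadb_probe
      ignore_errors: true

    - name: Switch to the upgrade action only when an existing service answered
      ansible.builtin.set_fact:
        kolla_action_mariadb: "{{ 'upgrade' if mariadb_probe is success else kolla_action_ng }}"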
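The restart sequence summarised in the recap follows the usual Galera bootstrap order: testbed-node-0 runs a one-off bootstrap container to initialise a new cluster, is started as the first member and waited on, then testbed-node-1 and testbed-node-2 are started one after the other, and finally the bootstrap node is restarted as a regular member. Each step is gated by a port-liveness wait plus a "Wait for MariaDB service to sync WSREP" check, so a node only joins once the previous one reports a synced Galera state. A minimal sketch of such a gate is shown below; it assumes the mariadb container and the monitor user shown earlier in this log, mariadb_monitor_password is a placeholder variable, and the actual handler in kolla-ansible may query the status differently.

    - name: Wait for MariaDB to report a synced WSREP state (sketch)
      ansible.builtin.command: >
        docker exec mariadb mysql -u monitor -p{{ mariadb_monitor_password }}
        --silent --skip-column-names
        -e "SHOW STATUS LIKE 'wsrep_local_state_comment'"
      register: wsrep_state
      until: "'Synced' in wsrep_state.stdout"
      retries: 30
      delay: 10
      changed_when: false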
2025-09-03 00:54:22.087043 | orchestrator | 2025-09-03 00:54:22 | INFO  | Task cfa9ed72-9ddc-459e-9720-a08d833ef43a is in state STARTED 2025-09-03 00:54:22.089250 | orchestrator | 2025-09-03 00:54:22 | INFO  | Task 4d2348d5-93ad-4368-a272-6786b8c24e42 is in state STARTED 2025-09-03 00:54:22.091516 | orchestrator | 2025-09-03 00:54:22 | INFO  | Task 1c08e4a0-f2e8-406d-969e-ff3637583606 is in state STARTED 2025-09-03 00:54:22.092485 | orchestrator | 2025-09-03 00:54:22 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:54:25.130131 | orchestrator | 2025-09-03 00:54:25 | INFO  | Task cfa9ed72-9ddc-459e-9720-a08d833ef43a is in state STARTED 2025-09-03 00:54:25.130972 | orchestrator | 2025-09-03 00:54:25 | INFO  | Task 4d2348d5-93ad-4368-a272-6786b8c24e42 is in state STARTED 2025-09-03 00:54:25.132157 | orchestrator | 2025-09-03 00:54:25 | INFO  | Task 1c08e4a0-f2e8-406d-969e-ff3637583606 is in state STARTED 2025-09-03 00:54:25.132252 | orchestrator | 2025-09-03 00:54:25 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:54:28.168647 | orchestrator | 2025-09-03 00:54:28 | INFO  | Task cfa9ed72-9ddc-459e-9720-a08d833ef43a is in state STARTED 2025-09-03 00:54:28.169118 | orchestrator | 2025-09-03 00:54:28 | INFO  | Task 4d2348d5-93ad-4368-a272-6786b8c24e42 is in state STARTED 2025-09-03 00:54:28.169881 | orchestrator | 2025-09-03 00:54:28 | INFO  | Task 1c08e4a0-f2e8-406d-969e-ff3637583606 is in state STARTED 2025-09-03 00:54:28.169918 | orchestrator | 2025-09-03 00:54:28 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:54:31.208038 | orchestrator | 2025-09-03 00:54:31 | INFO  | Task cfa9ed72-9ddc-459e-9720-a08d833ef43a is in state STARTED 2025-09-03 00:54:31.209514 | orchestrator | 2025-09-03 00:54:31 | INFO  | Task 4d2348d5-93ad-4368-a272-6786b8c24e42 is in state STARTED 2025-09-03 00:54:31.210642 | orchestrator | 2025-09-03 00:54:31 | INFO  | Task 1c08e4a0-f2e8-406d-969e-ff3637583606 is in state STARTED 2025-09-03 00:54:31.210668 | orchestrator | 2025-09-03 00:54:31 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:54:34.262424 | orchestrator | 2025-09-03 00:54:34 | INFO  | Task cfa9ed72-9ddc-459e-9720-a08d833ef43a is in state STARTED 2025-09-03 00:54:34.263174 | orchestrator | 2025-09-03 00:54:34 | INFO  | Task 4d2348d5-93ad-4368-a272-6786b8c24e42 is in state STARTED 2025-09-03 00:54:34.264198 | orchestrator | 2025-09-03 00:54:34 | INFO  | Task 1c08e4a0-f2e8-406d-969e-ff3637583606 is in state STARTED 2025-09-03 00:54:34.264311 | orchestrator | 2025-09-03 00:54:34 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:54:37.304214 | orchestrator | 2025-09-03 00:54:37 | INFO  | Task cfa9ed72-9ddc-459e-9720-a08d833ef43a is in state STARTED 2025-09-03 00:54:37.304442 | orchestrator | 2025-09-03 00:54:37 | INFO  | Task 4d2348d5-93ad-4368-a272-6786b8c24e42 is in state STARTED 2025-09-03 00:54:37.308099 | orchestrator | 2025-09-03 00:54:37 | INFO  | Task 1c08e4a0-f2e8-406d-969e-ff3637583606 is in state STARTED 2025-09-03 00:54:37.308126 | orchestrator | 2025-09-03 00:54:37 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:54:40.356665 | orchestrator | 2025-09-03 00:54:40 | INFO  | Task cfa9ed72-9ddc-459e-9720-a08d833ef43a is in state STARTED 2025-09-03 00:54:40.358281 | orchestrator | 2025-09-03 00:54:40 | INFO  | Task 4d2348d5-93ad-4368-a272-6786b8c24e42 is in state STARTED 2025-09-03 00:54:40.358797 | orchestrator | 2025-09-03 00:54:40 | INFO  | Task 1c08e4a0-f2e8-406d-969e-ff3637583606 is in state STARTED 2025-09-03 00:54:40.358821 
| orchestrator | 2025-09-03 00:54:40 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:54:43.412738 | orchestrator | 2025-09-03 00:54:43 | INFO  | Task cfa9ed72-9ddc-459e-9720-a08d833ef43a is in state STARTED 2025-09-03 00:54:43.415009 | orchestrator | 2025-09-03 00:54:43 | INFO  | Task 4d2348d5-93ad-4368-a272-6786b8c24e42 is in state STARTED 2025-09-03 00:54:43.416900 | orchestrator | 2025-09-03 00:54:43 | INFO  | Task 1c08e4a0-f2e8-406d-969e-ff3637583606 is in state STARTED 2025-09-03 00:54:43.417441 | orchestrator | 2025-09-03 00:54:43 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:54:46.468522 | orchestrator | 2025-09-03 00:54:46 | INFO  | Task cfa9ed72-9ddc-459e-9720-a08d833ef43a is in state STARTED 2025-09-03 00:54:46.470488 | orchestrator | 2025-09-03 00:54:46 | INFO  | Task 4d2348d5-93ad-4368-a272-6786b8c24e42 is in state STARTED 2025-09-03 00:54:46.472618 | orchestrator | 2025-09-03 00:54:46 | INFO  | Task 1c08e4a0-f2e8-406d-969e-ff3637583606 is in state STARTED 2025-09-03 00:54:46.473549 | orchestrator | 2025-09-03 00:54:46 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:54:49.535508 | orchestrator | 2025-09-03 00:54:49 | INFO  | Task cfa9ed72-9ddc-459e-9720-a08d833ef43a is in state STARTED 2025-09-03 00:54:49.537938 | orchestrator | 2025-09-03 00:54:49 | INFO  | Task 4d2348d5-93ad-4368-a272-6786b8c24e42 is in state STARTED 2025-09-03 00:54:49.541078 | orchestrator | 2025-09-03 00:54:49 | INFO  | Task 1c08e4a0-f2e8-406d-969e-ff3637583606 is in state STARTED 2025-09-03 00:54:49.541981 | orchestrator | 2025-09-03 00:54:49 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:54:52.578555 | orchestrator | 2025-09-03 00:54:52 | INFO  | Task cfa9ed72-9ddc-459e-9720-a08d833ef43a is in state STARTED 2025-09-03 00:54:52.582568 | orchestrator | 2025-09-03 00:54:52 | INFO  | Task 4d2348d5-93ad-4368-a272-6786b8c24e42 is in state STARTED 2025-09-03 00:54:52.582658 | orchestrator | 2025-09-03 00:54:52 | INFO  | Task 1c08e4a0-f2e8-406d-969e-ff3637583606 is in state STARTED 2025-09-03 00:54:52.582674 | orchestrator | 2025-09-03 00:54:52 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:54:55.625477 | orchestrator | 2025-09-03 00:54:55 | INFO  | Task cfa9ed72-9ddc-459e-9720-a08d833ef43a is in state STARTED 2025-09-03 00:54:55.625715 | orchestrator | 2025-09-03 00:54:55 | INFO  | Task 4d2348d5-93ad-4368-a272-6786b8c24e42 is in state STARTED 2025-09-03 00:54:55.628394 | orchestrator | 2025-09-03 00:54:55 | INFO  | Task 1c08e4a0-f2e8-406d-969e-ff3637583606 is in state STARTED 2025-09-03 00:54:55.628627 | orchestrator | 2025-09-03 00:54:55 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:54:58.672493 | orchestrator | 2025-09-03 00:54:58 | INFO  | Task cfa9ed72-9ddc-459e-9720-a08d833ef43a is in state STARTED 2025-09-03 00:54:58.672813 | orchestrator | 2025-09-03 00:54:58 | INFO  | Task 4d2348d5-93ad-4368-a272-6786b8c24e42 is in state STARTED 2025-09-03 00:54:58.674099 | orchestrator | 2025-09-03 00:54:58 | INFO  | Task 1c08e4a0-f2e8-406d-969e-ff3637583606 is in state STARTED 2025-09-03 00:54:58.674131 | orchestrator | 2025-09-03 00:54:58 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:55:01.717558 | orchestrator | 2025-09-03 00:55:01 | INFO  | Task cfa9ed72-9ddc-459e-9720-a08d833ef43a is in state STARTED 2025-09-03 00:55:01.718893 | orchestrator | 2025-09-03 00:55:01 | INFO  | Task 4d2348d5-93ad-4368-a272-6786b8c24e42 is in state STARTED 2025-09-03 00:55:01.720906 | orchestrator | 2025-09-03 00:55:01 | INFO  | 
Task 1c08e4a0-f2e8-406d-969e-ff3637583606 is in state STARTED
2025-09-03 00:55:01.720931 | orchestrator | 2025-09-03 00:55:01 | INFO  | Wait 1 second(s) until the next check
2025-09-03 00:55:04.767940 | orchestrator | 2025-09-03 00:55:04 | INFO  | Task cfa9ed72-9ddc-459e-9720-a08d833ef43a is in state STARTED
2025-09-03 00:55:04.770114 | orchestrator | 2025-09-03 00:55:04 | INFO  | Task 4d2348d5-93ad-4368-a272-6786b8c24e42 is in state STARTED
2025-09-03 00:55:04.772467 | orchestrator | 2025-09-03 00:55:04 | INFO  | Task 1c08e4a0-f2e8-406d-969e-ff3637583606 is in state STARTED
2025-09-03 00:55:04.772491 | orchestrator | 2025-09-03 00:55:04 | INFO  | Wait 1 second(s) until the next check
2025-09-03 00:55:07.813843 | orchestrator | 2025-09-03 00:55:07 | INFO  | Task cfa9ed72-9ddc-459e-9720-a08d833ef43a is in state STARTED
2025-09-03 00:55:07.815927 | orchestrator | 2025-09-03 00:55:07 | INFO  | Task 4d2348d5-93ad-4368-a272-6786b8c24e42 is in state STARTED
2025-09-03 00:55:07.817124 | orchestrator | 2025-09-03 00:55:07 | INFO  | Task 1c08e4a0-f2e8-406d-969e-ff3637583606 is in state STARTED
2025-09-03 00:55:07.817410 | orchestrator | 2025-09-03 00:55:07 | INFO  | Wait 1 second(s) until the next check
2025-09-03 00:55:10.860631 | orchestrator | 2025-09-03 00:55:10 | INFO  | Task cfa9ed72-9ddc-459e-9720-a08d833ef43a is in state STARTED
2025-09-03 00:55:10.862444 | orchestrator | 2025-09-03 00:55:10 | INFO  | Task 4d2348d5-93ad-4368-a272-6786b8c24e42 is in state STARTED
2025-09-03 00:55:10.863992 | orchestrator | 2025-09-03 00:55:10 | INFO  | Task 1c08e4a0-f2e8-406d-969e-ff3637583606 is in state STARTED
2025-09-03 00:55:10.864773 | orchestrator | 2025-09-03 00:55:10 | INFO  | Wait 1 second(s) until the next check
2025-09-03 00:55:13.910096 | orchestrator | 2025-09-03 00:55:13 | INFO  | Task cfa9ed72-9ddc-459e-9720-a08d833ef43a is in state STARTED
2025-09-03 00:55:13.911679 | orchestrator | 2025-09-03 00:55:13 | INFO  | Task 4d2348d5-93ad-4368-a272-6786b8c24e42 is in state STARTED
2025-09-03 00:55:13.913940 | orchestrator | 2025-09-03 00:55:13 | INFO  | Task 1c08e4a0-f2e8-406d-969e-ff3637583606 is in state STARTED
2025-09-03 00:55:13.913996 | orchestrator | 2025-09-03 00:55:13 | INFO  | Wait 1 second(s) until the next check
2025-09-03 00:55:16.960885 | orchestrator | 2025-09-03 00:55:16 | INFO  | Task cfa9ed72-9ddc-459e-9720-a08d833ef43a is in state STARTED
2025-09-03 00:55:16.961743 | orchestrator | 2025-09-03 00:55:16 | INFO  | Task 4d2348d5-93ad-4368-a272-6786b8c24e42 is in state STARTED
2025-09-03 00:55:16.963528 | orchestrator | 2025-09-03 00:55:16 | INFO  | Task 1c08e4a0-f2e8-406d-969e-ff3637583606 is in state STARTED
2025-09-03 00:55:16.963555 | orchestrator | 2025-09-03 00:55:16 | INFO  | Wait 1 second(s) until the next check
2025-09-03 00:55:20.027562 | orchestrator |
2025-09-03 00:55:20.027667 | orchestrator | 2025-09-03 00:55:20 | INFO  | Task cfa9ed72-9ddc-459e-9720-a08d833ef43a is in state SUCCESS
2025-09-03 00:55:20.029636 | orchestrator |
2025-09-03 00:55:20.029886 | orchestrator | PLAY [Create ceph pools] *******************************************************
2025-09-03 00:55:20.029905 | orchestrator |
2025-09-03 00:55:20.029945 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2025-09-03 00:55:20.029991 | orchestrator | Wednesday 03 September 2025 00:53:08 +0000 (0:00:00.580) 0:00:00.580 ***
2025-09-03 00:55:20.030009 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-03 00:55:20.030093 | orchestrator |
2025-09-03 00:55:20.030106 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2025-09-03 00:55:20.030117 | orchestrator | Wednesday 03 September 2025 00:53:09 +0000 (0:00:00.708) 0:00:01.289 ***
2025-09-03 00:55:20.030129 | orchestrator | ok: [testbed-node-3]
2025-09-03 00:55:20.030143 | orchestrator | ok: [testbed-node-5]
2025-09-03 00:55:20.030154 | orchestrator | ok: [testbed-node-4]
2025-09-03 00:55:20.030166 | orchestrator |
2025-09-03 00:55:20.030177 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2025-09-03 00:55:20.030189 | orchestrator | Wednesday 03 September 2025 00:53:10 +0000 (0:00:00.642) 0:00:01.932 ***
2025-09-03 00:55:20.030200 | orchestrator | ok: [testbed-node-3]
2025-09-03 00:55:20.030212 | orchestrator | ok: [testbed-node-4]
2025-09-03 00:55:20.030223 | orchestrator | ok: [testbed-node-5]
2025-09-03 00:55:20.030234 | orchestrator |
2025-09-03 00:55:20.030246 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2025-09-03 00:55:20.030257 | orchestrator | Wednesday 03 September 2025 00:53:10 +0000 (0:00:00.287) 0:00:02.219 ***
2025-09-03 00:55:20.030268 | orchestrator | ok: [testbed-node-3]
2025-09-03 00:55:20.030279 | orchestrator | ok: [testbed-node-4]
2025-09-03 00:55:20.030291 | orchestrator | ok: [testbed-node-5]
2025-09-03 00:55:20.030302 | orchestrator |
2025-09-03 00:55:20.030313 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2025-09-03 00:55:20.030325 | orchestrator | Wednesday 03 September 2025 00:53:11 +0000 (0:00:00.798) 0:00:03.018 ***
2025-09-03 00:55:20.030336 | orchestrator | ok: [testbed-node-3]
2025-09-03 00:55:20.030347 | orchestrator | ok: [testbed-node-4]
2025-09-03 00:55:20.030358 | orchestrator | ok: [testbed-node-5]
2025-09-03 00:55:20.030369 | orchestrator |
2025-09-03 00:55:20.030381 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2025-09-03 00:55:20.030392 | orchestrator | Wednesday 03 September 2025 00:53:11 +0000 (0:00:00.283) 0:00:03.316 ***
2025-09-03 00:55:20.030403 | orchestrator | ok: [testbed-node-3]
2025-09-03 00:55:20.030414 | orchestrator | ok: [testbed-node-4]
2025-09-03 00:55:20.030426 | orchestrator | ok: [testbed-node-5]
2025-09-03 00:55:20.030437 | orchestrator |
2025-09-03 00:55:20.030448 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2025-09-03 00:55:20.030459 | orchestrator | Wednesday 03 September 2025 00:53:11 +0000 (0:00:00.301) 0:00:03.600 ***
2025-09-03 00:55:20.030470 | orchestrator | ok: [testbed-node-3]
2025-09-03 00:55:20.030481 | orchestrator | ok: [testbed-node-4]
2025-09-03 00:55:20.030517 | orchestrator | ok: [testbed-node-5]
2025-09-03 00:55:20.030529 | orchestrator |
2025-09-03 00:55:20.030540 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2025-09-03 00:55:20.030551 | orchestrator | Wednesday 03 September 2025 00:53:12 +0000 (0:00:00.462) 0:00:03.902 ***
2025-09-03 00:55:20.030563 | orchestrator | skipping: [testbed-node-3]
2025-09-03 00:55:20.030574 | orchestrator | skipping: [testbed-node-4]
2025-09-03 00:55:20.030585 | orchestrator | skipping: [testbed-node-5]
2025-09-03 00:55:20.030597 | orchestrator |
2025-09-03 00:55:20.030608 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release]
****************** 2025-09-03 00:55:20.030619 | orchestrator | Wednesday 03 September 2025 00:53:12 +0000 (0:00:00.462) 0:00:04.365 *** 2025-09-03 00:55:20.030630 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:55:20.030641 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:55:20.030652 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:55:20.030663 | orchestrator | 2025-09-03 00:55:20.030675 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-09-03 00:55:20.030686 | orchestrator | Wednesday 03 September 2025 00:53:13 +0000 (0:00:00.294) 0:00:04.660 *** 2025-09-03 00:55:20.030697 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-03 00:55:20.030707 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-03 00:55:20.030718 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-03 00:55:20.030729 | orchestrator | 2025-09-03 00:55:20.030740 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-09-03 00:55:20.030751 | orchestrator | Wednesday 03 September 2025 00:53:13 +0000 (0:00:00.680) 0:00:05.340 *** 2025-09-03 00:55:20.030762 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:55:20.030773 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:55:20.030784 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:55:20.030795 | orchestrator | 2025-09-03 00:55:20.030806 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-09-03 00:55:20.030817 | orchestrator | Wednesday 03 September 2025 00:53:14 +0000 (0:00:00.449) 0:00:05.790 *** 2025-09-03 00:55:20.030827 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-03 00:55:20.030838 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-03 00:55:20.030849 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-03 00:55:20.030860 | orchestrator | 2025-09-03 00:55:20.030871 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-09-03 00:55:20.030882 | orchestrator | Wednesday 03 September 2025 00:53:16 +0000 (0:00:02.135) 0:00:07.925 *** 2025-09-03 00:55:20.030893 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-09-03 00:55:20.030904 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-09-03 00:55:20.030915 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-09-03 00:55:20.030926 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:55:20.030937 | orchestrator | 2025-09-03 00:55:20.030970 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-09-03 00:55:20.031008 | orchestrator | Wednesday 03 September 2025 00:53:16 +0000 (0:00:00.382) 0:00:08.308 *** 2025-09-03 00:55:20.031040 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-09-03 00:55:20.031064 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 
'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-09-03 00:55:20.031084 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-09-03 00:55:20.031120 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:55:20.031139 | orchestrator | 2025-09-03 00:55:20.031152 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-09-03 00:55:20.031163 | orchestrator | Wednesday 03 September 2025 00:53:17 +0000 (0:00:00.750) 0:00:09.059 *** 2025-09-03 00:55:20.031176 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-03 00:55:20.031190 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-03 00:55:20.031201 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-03 00:55:20.031212 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:55:20.031223 | orchestrator | 2025-09-03 00:55:20.031235 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-09-03 00:55:20.031246 | orchestrator | Wednesday 03 September 2025 00:53:17 +0000 (0:00:00.148) 0:00:09.207 *** 2025-09-03 00:55:20.031259 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'b042aed1399f', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-09-03 00:53:14.777240', 'end': '2025-09-03 00:53:14.828211', 'delta': '0:00:00.050971', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b042aed1399f'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-09-03 00:55:20.031273 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'a6c51cef11cc', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-09-03 00:53:15.570762', 'end': '2025-09-03 00:53:15.609681', 'delta': '0:00:00.038919', 'msg': '', 'invocation': 
{'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a6c51cef11cc'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-09-03 00:55:20.031301 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '187b242fe57c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-09-03 00:53:16.113352', 'end': '2025-09-03 00:53:16.157006', 'delta': '0:00:00.043654', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['187b242fe57c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-09-03 00:55:20.031320 | orchestrator | 2025-09-03 00:55:20.031332 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-09-03 00:55:20.031343 | orchestrator | Wednesday 03 September 2025 00:53:17 +0000 (0:00:00.345) 0:00:09.552 *** 2025-09-03 00:55:20.031354 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:55:20.031365 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:55:20.031376 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:55:20.031387 | orchestrator | 2025-09-03 00:55:20.031398 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-09-03 00:55:20.031409 | orchestrator | Wednesday 03 September 2025 00:53:18 +0000 (0:00:00.504) 0:00:10.057 *** 2025-09-03 00:55:20.031420 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-09-03 00:55:20.031431 | orchestrator | 2025-09-03 00:55:20.031442 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-09-03 00:55:20.031452 | orchestrator | Wednesday 03 September 2025 00:53:20 +0000 (0:00:01.670) 0:00:11.728 *** 2025-09-03 00:55:20.031463 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:55:20.031474 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:55:20.031485 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:55:20.031496 | orchestrator | 2025-09-03 00:55:20.031507 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-09-03 00:55:20.031518 | orchestrator | Wednesday 03 September 2025 00:53:20 +0000 (0:00:00.283) 0:00:12.012 *** 2025-09-03 00:55:20.031528 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:55:20.031539 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:55:20.031550 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:55:20.031561 | orchestrator | 2025-09-03 00:55:20.031572 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-09-03 00:55:20.031583 | orchestrator | Wednesday 03 September 2025 00:53:20 +0000 (0:00:00.397) 0:00:12.410 *** 2025-09-03 00:55:20.031594 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:55:20.031605 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:55:20.031616 
| orchestrator | skipping: [testbed-node-5] 2025-09-03 00:55:20.031627 | orchestrator | 2025-09-03 00:55:20.031638 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-09-03 00:55:20.031648 | orchestrator | Wednesday 03 September 2025 00:53:21 +0000 (0:00:00.456) 0:00:12.866 *** 2025-09-03 00:55:20.031659 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:55:20.031670 | orchestrator | 2025-09-03 00:55:20.031681 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-09-03 00:55:20.031692 | orchestrator | Wednesday 03 September 2025 00:53:21 +0000 (0:00:00.132) 0:00:12.999 *** 2025-09-03 00:55:20.031703 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:55:20.031714 | orchestrator | 2025-09-03 00:55:20.031725 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-09-03 00:55:20.031736 | orchestrator | Wednesday 03 September 2025 00:53:21 +0000 (0:00:00.218) 0:00:13.217 *** 2025-09-03 00:55:20.031747 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:55:20.031758 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:55:20.031769 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:55:20.031780 | orchestrator | 2025-09-03 00:55:20.031791 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-09-03 00:55:20.031802 | orchestrator | Wednesday 03 September 2025 00:53:21 +0000 (0:00:00.276) 0:00:13.494 *** 2025-09-03 00:55:20.031812 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:55:20.031823 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:55:20.031834 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:55:20.031845 | orchestrator | 2025-09-03 00:55:20.031863 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-09-03 00:55:20.031874 | orchestrator | Wednesday 03 September 2025 00:53:22 +0000 (0:00:00.317) 0:00:13.812 *** 2025-09-03 00:55:20.031884 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:55:20.031895 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:55:20.031907 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:55:20.031917 | orchestrator | 2025-09-03 00:55:20.031928 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-09-03 00:55:20.031939 | orchestrator | Wednesday 03 September 2025 00:53:22 +0000 (0:00:00.540) 0:00:14.352 *** 2025-09-03 00:55:20.031975 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:55:20.031988 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:55:20.031999 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:55:20.032010 | orchestrator | 2025-09-03 00:55:20.032020 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-09-03 00:55:20.032031 | orchestrator | Wednesday 03 September 2025 00:53:23 +0000 (0:00:00.312) 0:00:14.664 *** 2025-09-03 00:55:20.032042 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:55:20.032053 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:55:20.032065 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:55:20.032084 | orchestrator | 2025-09-03 00:55:20.032103 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-09-03 00:55:20.032123 | orchestrator | Wednesday 03 September 2025 00:53:23 +0000 (0:00:00.321) 0:00:14.986 *** 2025-09-03 
00:55:20.032144 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:55:20.032163 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:55:20.032182 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:55:20.032194 | orchestrator | 2025-09-03 00:55:20.032205 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-09-03 00:55:20.032248 | orchestrator | Wednesday 03 September 2025 00:53:23 +0000 (0:00:00.313) 0:00:15.299 *** 2025-09-03 00:55:20.032260 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:55:20.032277 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:55:20.032289 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:55:20.032300 | orchestrator | 2025-09-03 00:55:20.032311 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-09-03 00:55:20.032322 | orchestrator | Wednesday 03 September 2025 00:53:24 +0000 (0:00:00.461) 0:00:15.761 *** 2025-09-03 00:55:20.032334 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d05881db--8953--52a0--98ec--dd1036bee846-osd--block--d05881db--8953--52a0--98ec--dd1036bee846', 'dm-uuid-LVM-vVB7WYB05SG5ksLYtniNiR4wu8glVMPWKhYtoiaiSt5OIqt1nPLfaqf1U7zjf7YR'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-03 00:55:20.032348 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2e5a0ee6--219f--5b14--b340--2bfd497a8fc5-osd--block--2e5a0ee6--219f--5b14--b340--2bfd497a8fc5', 'dm-uuid-LVM-1KmnNiiQVjzl7pN9nTYyE5njRkbNrYz4h9XU6mMO0bdkLOvKg9lVlzPT5w2fmM4x'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-03 00:55:20.032360 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-03 00:55:20.032381 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-03 00:55:20.032393 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-03 00:55:20.032404 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-03 00:55:20.032416 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-03 00:55:20.032434 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-03 00:55:20.032450 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-03 00:55:20.032462 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-03 00:55:20.032473 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--400ae980--4c36--5b9b--960d--631158f9c2c9-osd--block--400ae980--4c36--5b9b--960d--631158f9c2c9', 'dm-uuid-LVM-IDjOLjbNgO5Gcv2cLb1cPZmsNftrK9fCNyLEUMtCihcv5KL0yIzEzjBRtXrX5eQW'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-03 00:55:20.032489 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc54dbc2-85fa-4a7d-8bd9-52ff930caf77', 'scsi-SQEMU_QEMU_HARDDISK_fc54dbc2-85fa-4a7d-8bd9-52ff930caf77'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc54dbc2-85fa-4a7d-8bd9-52ff930caf77-part1', 'scsi-SQEMU_QEMU_HARDDISK_fc54dbc2-85fa-4a7d-8bd9-52ff930caf77-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc54dbc2-85fa-4a7d-8bd9-52ff930caf77-part14', 'scsi-SQEMU_QEMU_HARDDISK_fc54dbc2-85fa-4a7d-8bd9-52ff930caf77-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc54dbc2-85fa-4a7d-8bd9-52ff930caf77-part15', 'scsi-SQEMU_QEMU_HARDDISK_fc54dbc2-85fa-4a7d-8bd9-52ff930caf77-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc54dbc2-85fa-4a7d-8bd9-52ff930caf77-part16', 'scsi-SQEMU_QEMU_HARDDISK_fc54dbc2-85fa-4a7d-8bd9-52ff930caf77-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-03 00:55:20.032517 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1107a6cb--8e5a--5215--8b60--1d473d685075-osd--block--1107a6cb--8e5a--5215--8b60--1d473d685075', 'dm-uuid-LVM-oNSc4vHMRM98uwbAfYefcePJlvTU2Nwwkd7GNCBmrAmQOPr2gvTWdfLuYAQHjDSI'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-03 00:55:20.032535 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--d05881db--8953--52a0--98ec--dd1036bee846-osd--block--d05881db--8953--52a0--98ec--dd1036bee846'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-dNVKS1-h0I6-cKeQ-KM7E-yqkM-njrQ-MJtXNz', 'scsi-0QEMU_QEMU_HARDDISK_9ba28649-84e7-4d30-a12b-e93c6e95fbcd', 'scsi-SQEMU_QEMU_HARDDISK_9ba28649-84e7-4d30-a12b-e93c6e95fbcd'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-03 00:55:20.032548 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--2e5a0ee6--219f--5b14--b340--2bfd497a8fc5-osd--block--2e5a0ee6--219f--5b14--b340--2bfd497a8fc5'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-CLcDvK-RJok-AP8L-Ull5-Xzqq-bCBS-35j80d', 'scsi-0QEMU_QEMU_HARDDISK_7512b390-1fa3-4840-9943-7c6482fdb145', 'scsi-SQEMU_QEMU_HARDDISK_7512b390-1fa3-4840-9943-7c6482fdb145'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-03 00:55:20.032566 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-03 00:55:20.032578 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e885087e-46ab-46e4-825b-bdcddcbfdff8', 'scsi-SQEMU_QEMU_HARDDISK_e885087e-46ab-46e4-825b-bdcddcbfdff8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-03 00:55:20.032591 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-03 00:55:20.032602 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-03-00-02-04-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-03 00:55:20.032619 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-03 00:55:20.032636 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-03 00:55:20.032648 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-03 00:55:20.032660 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-03 00:55:20.032683 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:55:20.032695 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-03 00:55:20.032706 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-03 00:55:20.032731 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c87564b4-441b-42f3-97de-587e6061c3ae', 'scsi-SQEMU_QEMU_HARDDISK_c87564b4-441b-42f3-97de-587e6061c3ae'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c87564b4-441b-42f3-97de-587e6061c3ae-part1', 'scsi-SQEMU_QEMU_HARDDISK_c87564b4-441b-42f3-97de-587e6061c3ae-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c87564b4-441b-42f3-97de-587e6061c3ae-part14', 'scsi-SQEMU_QEMU_HARDDISK_c87564b4-441b-42f3-97de-587e6061c3ae-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c87564b4-441b-42f3-97de-587e6061c3ae-part15', 'scsi-SQEMU_QEMU_HARDDISK_c87564b4-441b-42f3-97de-587e6061c3ae-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c87564b4-441b-42f3-97de-587e6061c3ae-part16', 'scsi-SQEMU_QEMU_HARDDISK_c87564b4-441b-42f3-97de-587e6061c3ae-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-03 00:55:20.032746 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--400ae980--4c36--5b9b--960d--631158f9c2c9-osd--block--400ae980--4c36--5b9b--960d--631158f9c2c9'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Mv3N52-jcqW-f5oK-qm4Y-NwnR-LFlk-3Lul3G', 'scsi-0QEMU_QEMU_HARDDISK_f4ffaa61-7d7a-4b4d-ae66-bf9c1470deb3', 'scsi-SQEMU_QEMU_HARDDISK_f4ffaa61-7d7a-4b4d-ae66-bf9c1470deb3'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-03 00:55:20.032758 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--1107a6cb--8e5a--5215--8b60--1d473d685075-osd--block--1107a6cb--8e5a--5215--8b60--1d473d685075'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-k9rpRE-oQfZ-x2kK-rb0E-T1A0-6v1a-SlK6kI', 'scsi-0QEMU_QEMU_HARDDISK_89937d38-622a-4519-a70d-71f9b6cc380e', 'scsi-SQEMU_QEMU_HARDDISK_89937d38-622a-4519-a70d-71f9b6cc380e'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-03 00:55:20.032776 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2aa4af3c-ac98-453f-b557-6d0c203c4201', 'scsi-SQEMU_QEMU_HARDDISK_2aa4af3c-ac98-453f-b557-6d0c203c4201'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-03 00:55:20.032788 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e75c81d9--f6c1--538f--9534--cc9e3445127a-osd--block--e75c81d9--f6c1--538f--9534--cc9e3445127a', 'dm-uuid-LVM-uV6mu7VkeLpyFdoMIvc3kKIapvcN5sCpS5UiCwvVt0Ysgo8oPMe1pPugUZ86q7Qi'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-03 00:55:20.032800 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-03-00-02-08-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-03 00:55:20.033358 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--634e15af--8858--53e6--9f62--917e12b08878-osd--block--634e15af--8858--53e6--9f62--917e12b08878', 
'dm-uuid-LVM-7atDwx2fefji4hgurJcmdtXUoHrK2uhSrGDUFw19zEE3Dr1YqTd7rS8tCRJuyUGB'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-03 00:55:20.033386 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:55:20.033406 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-03 00:55:20.033418 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-03 00:55:20.033440 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-03 00:55:20.033452 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-03 00:55:20.033463 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-03 00:55:20.033474 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-03 00:55:20.033485 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-03 00:55:20.033496 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-03 00:55:20.033524 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e8670175-54bd-41b5-bd3c-dd9ea44e7b4a', 'scsi-SQEMU_QEMU_HARDDISK_e8670175-54bd-41b5-bd3c-dd9ea44e7b4a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e8670175-54bd-41b5-bd3c-dd9ea44e7b4a-part1', 'scsi-SQEMU_QEMU_HARDDISK_e8670175-54bd-41b5-bd3c-dd9ea44e7b4a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e8670175-54bd-41b5-bd3c-dd9ea44e7b4a-part14', 'scsi-SQEMU_QEMU_HARDDISK_e8670175-54bd-41b5-bd3c-dd9ea44e7b4a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e8670175-54bd-41b5-bd3c-dd9ea44e7b4a-part15', 'scsi-SQEMU_QEMU_HARDDISK_e8670175-54bd-41b5-bd3c-dd9ea44e7b4a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e8670175-54bd-41b5-bd3c-dd9ea44e7b4a-part16', 'scsi-SQEMU_QEMU_HARDDISK_e8670175-54bd-41b5-bd3c-dd9ea44e7b4a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-03 00:55:20.033544 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--e75c81d9--f6c1--538f--9534--cc9e3445127a-osd--block--e75c81d9--f6c1--538f--9534--cc9e3445127a'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-6DRCRF-1ldo-e2BN-8keY-ovu4-LPee-swW9qe', 'scsi-0QEMU_QEMU_HARDDISK_409307c9-8e7f-483b-a404-5462fce46233', 'scsi-SQEMU_QEMU_HARDDISK_409307c9-8e7f-483b-a404-5462fce46233'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-03 00:55:20.033556 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--634e15af--8858--53e6--9f62--917e12b08878-osd--block--634e15af--8858--53e6--9f62--917e12b08878'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-UTntes-z8qa-dWgQ-K8BI-IKj9-wLWC-XmEeXz', 'scsi-0QEMU_QEMU_HARDDISK_ce19fbd3-6a41-4577-8f91-9183654abf8c', 'scsi-SQEMU_QEMU_HARDDISK_ce19fbd3-6a41-4577-8f91-9183654abf8c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-03 00:55:20.033567 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4852aea-51af-4111-8e77-3990a105da37', 'scsi-SQEMU_QEMU_HARDDISK_d4852aea-51af-4111-8e77-3990a105da37'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-03 00:55:20.033586 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-03-00-02-10-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-03 00:55:20.033597 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:55:20.033608 | orchestrator | 2025-09-03 00:55:20.033618 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-09-03 00:55:20.033628 | orchestrator | Wednesday 03 September 2025 00:53:24 +0000 (0:00:00.512) 0:00:16.273 *** 2025-09-03 00:55:20.033639 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d05881db--8953--52a0--98ec--dd1036bee846-osd--block--d05881db--8953--52a0--98ec--dd1036bee846', 'dm-uuid-LVM-vVB7WYB05SG5ksLYtniNiR4wu8glVMPWKhYtoiaiSt5OIqt1nPLfaqf1U7zjf7YR'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:55:20.033657 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2e5a0ee6--219f--5b14--b340--2bfd497a8fc5-osd--block--2e5a0ee6--219f--5b14--b340--2bfd497a8fc5', 'dm-uuid-LVM-1KmnNiiQVjzl7pN9nTYyE5njRkbNrYz4h9XU6mMO0bdkLOvKg9lVlzPT5w2fmM4x'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:55:20.033667 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:55:20.033678 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:55:20.033688 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:55:20.033709 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': 
'0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:55:20.033727 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:55:20.033737 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:55:20.033748 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--400ae980--4c36--5b9b--960d--631158f9c2c9-osd--block--400ae980--4c36--5b9b--960d--631158f9c2c9', 'dm-uuid-LVM-IDjOLjbNgO5Gcv2cLb1cPZmsNftrK9fCNyLEUMtCihcv5KL0yIzEzjBRtXrX5eQW'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:55:20.033758 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:55:20.033768 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:55:20.033788 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1107a6cb--8e5a--5215--8b60--1d473d685075-osd--block--1107a6cb--8e5a--5215--8b60--1d473d685075', 'dm-uuid-LVM-oNSc4vHMRM98uwbAfYefcePJlvTU2Nwwkd7GNCBmrAmQOPr2gvTWdfLuYAQHjDSI'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:55:20.033806 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc54dbc2-85fa-4a7d-8bd9-52ff930caf77', 'scsi-SQEMU_QEMU_HARDDISK_fc54dbc2-85fa-4a7d-8bd9-52ff930caf77'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc54dbc2-85fa-4a7d-8bd9-52ff930caf77-part1', 'scsi-SQEMU_QEMU_HARDDISK_fc54dbc2-85fa-4a7d-8bd9-52ff930caf77-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc54dbc2-85fa-4a7d-8bd9-52ff930caf77-part14', 'scsi-SQEMU_QEMU_HARDDISK_fc54dbc2-85fa-4a7d-8bd9-52ff930caf77-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc54dbc2-85fa-4a7d-8bd9-52ff930caf77-part15', 'scsi-SQEMU_QEMU_HARDDISK_fc54dbc2-85fa-4a7d-8bd9-52ff930caf77-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc54dbc2-85fa-4a7d-8bd9-52ff930caf77-part16', 'scsi-SQEMU_QEMU_HARDDISK_fc54dbc2-85fa-4a7d-8bd9-52ff930caf77-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:55:20.033818 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:55:20.033829 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--d05881db--8953--52a0--98ec--dd1036bee846-osd--block--d05881db--8953--52a0--98ec--dd1036bee846'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-dNVKS1-h0I6-cKeQ-KM7E-yqkM-njrQ-MJtXNz', 'scsi-0QEMU_QEMU_HARDDISK_9ba28649-84e7-4d30-a12b-e93c6e95fbcd', 'scsi-SQEMU_QEMU_HARDDISK_9ba28649-84e7-4d30-a12b-e93c6e95fbcd'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:55:20.033850 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:55:20.033867 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:55:20.033877 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--2e5a0ee6--219f--5b14--b340--2bfd497a8fc5-osd--block--2e5a0ee6--219f--5b14--b340--2bfd497a8fc5'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-CLcDvK-RJok-AP8L-Ull5-Xzqq-bCBS-35j80d', 'scsi-0QEMU_QEMU_HARDDISK_7512b390-1fa3-4840-9943-7c6482fdb145', 'scsi-SQEMU_QEMU_HARDDISK_7512b390-1fa3-4840-9943-7c6482fdb145'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:55:20.033888 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:55:20.033898 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e885087e-46ab-46e4-825b-bdcddcbfdff8', 'scsi-SQEMU_QEMU_HARDDISK_e885087e-46ab-46e4-825b-bdcddcbfdff8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:55:20.033912 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:55:20.033937 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-03-00-02-04-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:55:20.033974 | orchestrator | skipping: [testbed-node-3] 
2025-09-03 00:55:20.033988 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:55:20.034000 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:55:20.034012 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e75c81d9--f6c1--538f--9534--cc9e3445127a-osd--block--e75c81d9--f6c1--538f--9534--cc9e3445127a', 'dm-uuid-LVM-uV6mu7VkeLpyFdoMIvc3kKIapvcN5sCpS5UiCwvVt0Ysgo8oPMe1pPugUZ86q7Qi'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:55:20.034055 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:55:20.034079 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--634e15af--8858--53e6--9f62--917e12b08878-osd--block--634e15af--8858--53e6--9f62--917e12b08878', 'dm-uuid-LVM-7atDwx2fefji4hgurJcmdtXUoHrK2uhSrGDUFw19zEE3Dr1YqTd7rS8tCRJuyUGB'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 
1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:55:20.034099 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c87564b4-441b-42f3-97de-587e6061c3ae', 'scsi-SQEMU_QEMU_HARDDISK_c87564b4-441b-42f3-97de-587e6061c3ae'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c87564b4-441b-42f3-97de-587e6061c3ae-part1', 'scsi-SQEMU_QEMU_HARDDISK_c87564b4-441b-42f3-97de-587e6061c3ae-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c87564b4-441b-42f3-97de-587e6061c3ae-part14', 'scsi-SQEMU_QEMU_HARDDISK_c87564b4-441b-42f3-97de-587e6061c3ae-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c87564b4-441b-42f3-97de-587e6061c3ae-part15', 'scsi-SQEMU_QEMU_HARDDISK_c87564b4-441b-42f3-97de-587e6061c3ae-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c87564b4-441b-42f3-97de-587e6061c3ae-part16', 'scsi-SQEMU_QEMU_HARDDISK_c87564b4-441b-42f3-97de-587e6061c3ae-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:55:20.034114 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:55:20.034126 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--400ae980--4c36--5b9b--960d--631158f9c2c9-osd--block--400ae980--4c36--5b9b--960d--631158f9c2c9'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Mv3N52-jcqW-f5oK-qm4Y-NwnR-LFlk-3Lul3G', 'scsi-0QEMU_QEMU_HARDDISK_f4ffaa61-7d7a-4b4d-ae66-bf9c1470deb3', 'scsi-SQEMU_QEMU_HARDDISK_f4ffaa61-7d7a-4b4d-ae66-bf9c1470deb3'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:55:20.034160 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:55:20.034180 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--1107a6cb--8e5a--5215--8b60--1d473d685075-osd--block--1107a6cb--8e5a--5215--8b60--1d473d685075'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-k9rpRE-oQfZ-x2kK-rb0E-T1A0-6v1a-SlK6kI', 'scsi-0QEMU_QEMU_HARDDISK_89937d38-622a-4519-a70d-71f9b6cc380e', 'scsi-SQEMU_QEMU_HARDDISK_89937d38-622a-4519-a70d-71f9b6cc380e'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:55:20.034198 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:55:20.034218 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2aa4af3c-ac98-453f-b557-6d0c203c4201', 'scsi-SQEMU_QEMU_HARDDISK_2aa4af3c-ac98-453f-b557-6d0c203c4201'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:55:20.034236 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:55:20.034267 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-03-00-02-08-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:55:20.034280 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:55:20.034292 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:55:20.034304 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:55:20.034314 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:55:20.034325 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:55:20.034347 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e8670175-54bd-41b5-bd3c-dd9ea44e7b4a', 'scsi-SQEMU_QEMU_HARDDISK_e8670175-54bd-41b5-bd3c-dd9ea44e7b4a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e8670175-54bd-41b5-bd3c-dd9ea44e7b4a-part1', 'scsi-SQEMU_QEMU_HARDDISK_e8670175-54bd-41b5-bd3c-dd9ea44e7b4a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e8670175-54bd-41b5-bd3c-dd9ea44e7b4a-part14', 'scsi-SQEMU_QEMU_HARDDISK_e8670175-54bd-41b5-bd3c-dd9ea44e7b4a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e8670175-54bd-41b5-bd3c-dd9ea44e7b4a-part15', 'scsi-SQEMU_QEMU_HARDDISK_e8670175-54bd-41b5-bd3c-dd9ea44e7b4a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e8670175-54bd-41b5-bd3c-dd9ea44e7b4a-part16', 'scsi-SQEMU_QEMU_HARDDISK_e8670175-54bd-41b5-bd3c-dd9ea44e7b4a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:55:20.034365 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | 
default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--e75c81d9--f6c1--538f--9534--cc9e3445127a-osd--block--e75c81d9--f6c1--538f--9534--cc9e3445127a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-6DRCRF-1ldo-e2BN-8keY-ovu4-LPee-swW9qe', 'scsi-0QEMU_QEMU_HARDDISK_409307c9-8e7f-483b-a404-5462fce46233', 'scsi-SQEMU_QEMU_HARDDISK_409307c9-8e7f-483b-a404-5462fce46233'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:55:20.034376 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--634e15af--8858--53e6--9f62--917e12b08878-osd--block--634e15af--8858--53e6--9f62--917e12b08878'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-UTntes-z8qa-dWgQ-K8BI-IKj9-wLWC-XmEeXz', 'scsi-0QEMU_QEMU_HARDDISK_ce19fbd3-6a41-4577-8f91-9183654abf8c', 'scsi-SQEMU_QEMU_HARDDISK_ce19fbd3-6a41-4577-8f91-9183654abf8c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:55:20.034387 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4852aea-51af-4111-8e77-3990a105da37', 'scsi-SQEMU_QEMU_HARDDISK_d4852aea-51af-4111-8e77-3990a105da37'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:55:20.034414 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-03-00-02-10-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-03 00:55:20.034425 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:55:20.034435 | orchestrator | 2025-09-03 00:55:20.034445 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-09-03 00:55:20.034455 | orchestrator | Wednesday 03 September 2025 00:53:25 +0000 (0:00:00.524) 0:00:16.797 *** 2025-09-03 00:55:20.034465 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:55:20.034475 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:55:20.034485 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:55:20.034495 | orchestrator | 2025-09-03 00:55:20.034505 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-09-03 00:55:20.034514 | orchestrator | Wednesday 03 September 2025 00:53:25 +0000 (0:00:00.657) 0:00:17.455 *** 2025-09-03 00:55:20.034524 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:55:20.034534 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:55:20.034544 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:55:20.034553 | orchestrator | 2025-09-03 00:55:20.034563 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-09-03 00:55:20.034573 | orchestrator | Wednesday 03 September 2025 00:53:26 +0000 (0:00:00.458) 0:00:17.914 *** 2025-09-03 00:55:20.034583 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:55:20.034593 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:55:20.034602 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:55:20.034612 | orchestrator | 2025-09-03 00:55:20.034622 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-09-03 00:55:20.034632 | orchestrator | Wednesday 03 September 2025 00:53:26 +0000 (0:00:00.628) 0:00:18.542 *** 2025-09-03 00:55:20.034642 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:55:20.034652 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:55:20.034662 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:55:20.034672 | orchestrator | 2025-09-03 00:55:20.034682 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-09-03 00:55:20.034692 | orchestrator | Wednesday 03 September 
2025 00:53:27 +0000 (0:00:00.274) 0:00:18.817 *** 2025-09-03 00:55:20.034701 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:55:20.034711 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:55:20.034721 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:55:20.034731 | orchestrator | 2025-09-03 00:55:20.034741 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-09-03 00:55:20.034751 | orchestrator | Wednesday 03 September 2025 00:53:27 +0000 (0:00:00.390) 0:00:19.207 *** 2025-09-03 00:55:20.034761 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:55:20.034771 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:55:20.034780 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:55:20.034790 | orchestrator | 2025-09-03 00:55:20.034800 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-09-03 00:55:20.034810 | orchestrator | Wednesday 03 September 2025 00:53:28 +0000 (0:00:00.463) 0:00:19.671 *** 2025-09-03 00:55:20.034825 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-09-03 00:55:20.034835 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-09-03 00:55:20.034845 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-09-03 00:55:20.034855 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-09-03 00:55:20.034865 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-09-03 00:55:20.034875 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-09-03 00:55:20.034885 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-09-03 00:55:20.034894 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-09-03 00:55:20.034904 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-09-03 00:55:20.034914 | orchestrator | 2025-09-03 00:55:20.034924 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-09-03 00:55:20.034933 | orchestrator | Wednesday 03 September 2025 00:53:28 +0000 (0:00:00.816) 0:00:20.487 *** 2025-09-03 00:55:20.034943 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-09-03 00:55:20.034982 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-09-03 00:55:20.034992 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-09-03 00:55:20.035002 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:55:20.035012 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-09-03 00:55:20.035021 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-09-03 00:55:20.035031 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-09-03 00:55:20.035040 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:55:20.035050 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-09-03 00:55:20.035060 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-09-03 00:55:20.035069 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-09-03 00:55:20.035079 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:55:20.035088 | orchestrator | 2025-09-03 00:55:20.035098 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-09-03 00:55:20.035108 | orchestrator | Wednesday 03 September 2025 00:53:29 +0000 (0:00:00.363) 0:00:20.850 *** 2025-09-03 00:55:20.035117 | 
orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-03 00:55:20.035127 | orchestrator | 2025-09-03 00:55:20.035137 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-09-03 00:55:20.035147 | orchestrator | Wednesday 03 September 2025 00:53:29 +0000 (0:00:00.686) 0:00:21.537 *** 2025-09-03 00:55:20.035157 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:55:20.035167 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:55:20.035177 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:55:20.035187 | orchestrator | 2025-09-03 00:55:20.035206 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-09-03 00:55:20.035232 | orchestrator | Wednesday 03 September 2025 00:53:30 +0000 (0:00:00.333) 0:00:21.870 *** 2025-09-03 00:55:20.035250 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:55:20.035268 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:55:20.035285 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:55:20.035297 | orchestrator | 2025-09-03 00:55:20.035307 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-09-03 00:55:20.035316 | orchestrator | Wednesday 03 September 2025 00:53:30 +0000 (0:00:00.288) 0:00:22.159 *** 2025-09-03 00:55:20.035326 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:55:20.035336 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:55:20.035346 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:55:20.035355 | orchestrator | 2025-09-03 00:55:20.035365 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-09-03 00:55:20.035374 | orchestrator | Wednesday 03 September 2025 00:53:30 +0000 (0:00:00.298) 0:00:22.457 *** 2025-09-03 00:55:20.035393 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:55:20.035403 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:55:20.035413 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:55:20.035422 | orchestrator | 2025-09-03 00:55:20.035432 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-09-03 00:55:20.035442 | orchestrator | Wednesday 03 September 2025 00:53:31 +0000 (0:00:00.636) 0:00:23.093 *** 2025-09-03 00:55:20.035451 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-03 00:55:20.035461 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-03 00:55:20.035471 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-03 00:55:20.035480 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:55:20.035490 | orchestrator | 2025-09-03 00:55:20.035500 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-09-03 00:55:20.035509 | orchestrator | Wednesday 03 September 2025 00:53:31 +0000 (0:00:00.357) 0:00:23.451 *** 2025-09-03 00:55:20.035519 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-03 00:55:20.035528 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-03 00:55:20.035538 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-03 00:55:20.035547 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:55:20.035557 | orchestrator | 2025-09-03 00:55:20.035567 | orchestrator | TASK 
[ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-09-03 00:55:20.035577 | orchestrator | Wednesday 03 September 2025 00:53:32 +0000 (0:00:00.373) 0:00:23.824 *** 2025-09-03 00:55:20.035586 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-03 00:55:20.035596 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-03 00:55:20.035606 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-03 00:55:20.035615 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:55:20.035625 | orchestrator | 2025-09-03 00:55:20.035635 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-09-03 00:55:20.035644 | orchestrator | Wednesday 03 September 2025 00:53:32 +0000 (0:00:00.374) 0:00:24.199 *** 2025-09-03 00:55:20.035654 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:55:20.035664 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:55:20.035673 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:55:20.035683 | orchestrator | 2025-09-03 00:55:20.035693 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-09-03 00:55:20.035702 | orchestrator | Wednesday 03 September 2025 00:53:32 +0000 (0:00:00.284) 0:00:24.484 *** 2025-09-03 00:55:20.035712 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-09-03 00:55:20.035722 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-09-03 00:55:20.035731 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-09-03 00:55:20.035741 | orchestrator | 2025-09-03 00:55:20.035751 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-09-03 00:55:20.035760 | orchestrator | Wednesday 03 September 2025 00:53:33 +0000 (0:00:00.497) 0:00:24.981 *** 2025-09-03 00:55:20.035770 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-03 00:55:20.035780 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-03 00:55:20.035789 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-03 00:55:20.035799 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-09-03 00:55:20.035808 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-09-03 00:55:20.035818 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-09-03 00:55:20.035827 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-09-03 00:55:20.035837 | orchestrator | 2025-09-03 00:55:20.035847 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-09-03 00:55:20.035866 | orchestrator | Wednesday 03 September 2025 00:53:34 +0000 (0:00:01.031) 0:00:26.013 *** 2025-09-03 00:55:20.035875 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-03 00:55:20.035885 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-03 00:55:20.035894 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-03 00:55:20.035904 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-09-03 00:55:20.035914 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-09-03 
00:55:20.035924 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-09-03 00:55:20.035933 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-09-03 00:55:20.035980 | orchestrator | 2025-09-03 00:55:20.035996 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2025-09-03 00:55:20.036006 | orchestrator | Wednesday 03 September 2025 00:53:36 +0000 (0:00:02.045) 0:00:28.058 *** 2025-09-03 00:55:20.036022 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:55:20.036032 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:55:20.036042 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2025-09-03 00:55:20.036052 | orchestrator | 2025-09-03 00:55:20.036061 | orchestrator | TASK [create openstack pool(s)] ************************************************ 2025-09-03 00:55:20.036071 | orchestrator | Wednesday 03 September 2025 00:53:36 +0000 (0:00:00.366) 0:00:28.425 *** 2025-09-03 00:55:20.036082 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-03 00:55:20.036093 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-03 00:55:20.036103 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-03 00:55:20.036114 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-03 00:55:20.036124 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-03 00:55:20.036133 | orchestrator | 2025-09-03 00:55:20.036143 | orchestrator | TASK [generate keys] *********************************************************** 2025-09-03 00:55:20.036153 | orchestrator | Wednesday 03 September 2025 00:54:23 +0000 (0:00:46.310) 0:01:14.736 *** 2025-09-03 00:55:20.036163 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-03 00:55:20.036173 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-03 00:55:20.036182 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-03 00:55:20.036192 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-03 00:55:20.036202 | orchestrator | changed: [testbed-node-5 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2025-09-03 00:55:20.036217 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-03 00:55:20.036232 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2025-09-03 00:55:20.036249 | orchestrator | 2025-09-03 00:55:20.036266 | orchestrator | TASK [get keys from monitors] ************************************************** 2025-09-03 00:55:20.036283 | orchestrator | Wednesday 03 September 2025 00:54:47 +0000 (0:00:24.288) 0:01:39.024 *** 2025-09-03 00:55:20.036301 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-03 00:55:20.036317 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-03 00:55:20.036331 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-03 00:55:20.036341 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-03 00:55:20.036350 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-03 00:55:20.036360 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-03 00:55:20.036370 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-03 00:55:20.036379 | orchestrator | 2025-09-03 00:55:20.036389 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2025-09-03 00:55:20.036398 | orchestrator | Wednesday 03 September 2025 00:54:59 +0000 (0:00:12.115) 0:01:51.140 *** 2025-09-03 00:55:20.036408 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-03 00:55:20.036417 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-03 00:55:20.036427 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-03 00:55:20.036437 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-03 00:55:20.036446 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-03 00:55:20.036456 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-03 00:55:20.036471 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-03 00:55:20.036481 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-03 00:55:20.036491 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-03 00:55:20.036501 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-03 00:55:20.036511 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-03 00:55:20.036521 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-03 00:55:20.036530 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-03 00:55:20.036540 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-03 00:55:20.036549 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-03 00:55:20.036559 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-03 00:55:20.036569 | orchestrator | 
changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-03 00:55:20.036578 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-03 00:55:20.036588 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2025-09-03 00:55:20.036597 | orchestrator | 2025-09-03 00:55:20.036607 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-03 00:55:20.036693 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2025-09-03 00:55:20.036714 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-09-03 00:55:20.036732 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-09-03 00:55:20.036742 | orchestrator | 2025-09-03 00:55:20.036752 | orchestrator | 2025-09-03 00:55:20.036762 | orchestrator | 2025-09-03 00:55:20.036772 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-03 00:55:20.036782 | orchestrator | Wednesday 03 September 2025 00:55:17 +0000 (0:00:18.295) 0:02:09.435 *** 2025-09-03 00:55:20.036791 | orchestrator | =============================================================================== 2025-09-03 00:55:20.036801 | orchestrator | create openstack pool(s) ----------------------------------------------- 46.31s 2025-09-03 00:55:20.036811 | orchestrator | generate keys ---------------------------------------------------------- 24.29s 2025-09-03 00:55:20.036821 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 18.30s 2025-09-03 00:55:20.036830 | orchestrator | get keys from monitors ------------------------------------------------- 12.12s 2025-09-03 00:55:20.036840 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.14s 2025-09-03 00:55:20.036850 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.05s 2025-09-03 00:55:20.036859 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.67s 2025-09-03 00:55:20.036869 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 1.03s 2025-09-03 00:55:20.036879 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.82s 2025-09-03 00:55:20.036888 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.80s 2025-09-03 00:55:20.036898 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.75s 2025-09-03 00:55:20.036908 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.71s 2025-09-03 00:55:20.036917 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.69s 2025-09-03 00:55:20.036927 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.68s 2025-09-03 00:55:20.036937 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.66s 2025-09-03 00:55:20.037003 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.64s 2025-09-03 00:55:20.037017 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.64s 2025-09-03 00:55:20.037026 | orchestrator | ceph-facts : Read osd pool default crush rule 
--------------------------- 0.63s
2025-09-03 00:55:20.037036 | orchestrator | ceph-facts : Set_fact build devices from resolved symlinks -------------- 0.54s
2025-09-03 00:55:20.037046 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.52s
2025-09-03 00:55:20.037056 | orchestrator | 2025-09-03 00:55:20 | INFO  | Task c6a51a1c-212f-4fb9-b181-c295812ebbf1 is in state STARTED
2025-09-03 00:55:20.037066 | orchestrator | 2025-09-03 00:55:20 | INFO  | Task 4d2348d5-93ad-4368-a272-6786b8c24e42 is in state STARTED
2025-09-03 00:55:20.037076 | orchestrator | 2025-09-03 00:55:20 | INFO  | Task 1c08e4a0-f2e8-406d-969e-ff3637583606 is in state STARTED
2025-09-03 00:55:20.037086 | orchestrator | 2025-09-03 00:55:20 | INFO  | Wait 1 second(s) until the next check
[identical state checks for tasks c6a51a1c, 4d2348d5 and 1c08e4a0 repeat every ~3 seconds from 00:55:23 through 00:55:47]
2025-09-03 00:55:50.545692 | orchestrator | 2025-09-03 00:55:50 | INFO  | Task c6a51a1c-212f-4fb9-b181-c295812ebbf1 is in state SUCCESS
2025-09-03 00:55:50.549242 | orchestrator | 2025-09-03 00:55:50 | INFO  | Task 4d2348d5-93ad-4368-a272-6786b8c24e42 is in state STARTED
2025-09-03 00:55:50.552912 | orchestrator | 2025-09-03 00:55:50 | INFO  | Task 1c08e4a0-f2e8-406d-969e-ff3637583606 is in state STARTED
2025-09-03 00:55:50.555237 | orchestrator | 2025-09-03 00:55:50 | INFO  | Task 03b56fc5-4c70-4502-96da-a8d3df293d6b is in state STARTED
2025-09-03 00:55:50.555609 | orchestrator | 2025-09-03 00:55:50 | INFO  | Wait 1 second(s) until the next check
[the same three task-state checks repeat at 00:55:53 and 00:55:56]
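The timing recap above belongs to the Ceph client-key play: keys are generated once on the first monitor, read back, and then fanned out to the other monitors, which is why every result is reported as testbed-node-5 delegating to testbed-node-0/1/2. The tasks below are a minimal sketch of that pattern using plain Ansible modules; the key names, capabilities and paths are illustrative assumptions, not the actual ceph-ansible/OSISM task code.

- name: Generate keys (sketch; once, on the first monitor)
  ansible.builtin.command: >
    ceph auth get-or-create client.{{ item }}
    mon 'profile rbd' osd 'profile rbd'
  # caps above are illustrative; the real play derives them per key
  loop: [glance, cinder, cinder-backup, nova, gnocchi, manila]
  run_once: true
  delegate_to: "{{ groups[mon_group_name][0] }}"   # the delegation pattern shown in the log

- name: Get keys from monitors (sketch)
  ansible.builtin.command: "ceph auth get client.{{ item }}"
  loop: [glance, cinder, cinder-backup, nova, gnocchi, manila]
  run_once: true
  delegate_to: "{{ groups[mon_group_name][0] }}"
  register: _keys
  changed_when: false

- name: Copy ceph key(s) if needed (sketch; one copy per key and monitor)
  ansible.builtin.copy:
    content: "{{ item.0.stdout }}\n"
    dest: "/etc/ceph/ceph.client.{{ item.0.item }}.keyring"
    owner: ceph
    group: ceph
    mode: "0600"
  run_once: true
  delegate_to: "{{ item.1 }}"
  loop: "{{ _keys.results | product(groups[mon_group_name]) | list }}"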
orchestrator | 2025-09-03 00:55:56 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:55:59.719998 | orchestrator | 2025-09-03 00:55:59 | INFO  | Task 4d2348d5-93ad-4368-a272-6786b8c24e42 is in state STARTED 2025-09-03 00:55:59.722661 | orchestrator | 2025-09-03 00:55:59 | INFO  | Task 1c08e4a0-f2e8-406d-969e-ff3637583606 is in state STARTED 2025-09-03 00:55:59.723379 | orchestrator | 2025-09-03 00:55:59 | INFO  | Task 03b56fc5-4c70-4502-96da-a8d3df293d6b is in state STARTED 2025-09-03 00:55:59.723412 | orchestrator | 2025-09-03 00:55:59 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:56:02.767691 | orchestrator | 2025-09-03 00:56:02 | INFO  | Task 4d2348d5-93ad-4368-a272-6786b8c24e42 is in state STARTED 2025-09-03 00:56:02.770433 | orchestrator | 2025-09-03 00:56:02 | INFO  | Task 1c08e4a0-f2e8-406d-969e-ff3637583606 is in state SUCCESS 2025-09-03 00:56:02.772496 | orchestrator | 2025-09-03 00:56:02.772537 | orchestrator | 2025-09-03 00:56:02.772550 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2025-09-03 00:56:02.772563 | orchestrator | 2025-09-03 00:56:02.772574 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2025-09-03 00:56:02.772586 | orchestrator | Wednesday 03 September 2025 00:55:22 +0000 (0:00:00.178) 0:00:00.178 *** 2025-09-03 00:56:02.772598 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2025-09-03 00:56:02.772610 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-09-03 00:56:02.772621 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-09-03 00:56:02.772633 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2025-09-03 00:56:02.772644 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-09-03 00:56:02.772682 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2025-09-03 00:56:02.772693 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2025-09-03 00:56:02.772704 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2025-09-03 00:56:02.772715 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2025-09-03 00:56:02.772726 | orchestrator | 2025-09-03 00:56:02.772738 | orchestrator | TASK [Create share directory] ************************************************** 2025-09-03 00:56:02.772749 | orchestrator | Wednesday 03 September 2025 00:55:26 +0000 (0:00:04.192) 0:00:04.370 *** 2025-09-03 00:56:02.772760 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-03 00:56:02.772771 | orchestrator | 2025-09-03 00:56:02.772782 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2025-09-03 00:56:02.772793 | orchestrator | Wednesday 03 September 2025 00:55:27 +0000 (0:00:01.033) 0:00:05.404 *** 2025-09-03 00:56:02.772804 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-09-03 00:56:02.772816 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-09-03 00:56:02.772826 | orchestrator | ok: 
[testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-09-03 00:56:02.772837 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-09-03 00:56:02.772848 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-09-03 00:56:02.772860 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-09-03 00:56:02.772870 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-09-03 00:56:02.772896 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-09-03 00:56:02.772908 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-09-03 00:56:02.772919 | orchestrator | 2025-09-03 00:56:02.772929 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2025-09-03 00:56:02.772940 | orchestrator | Wednesday 03 September 2025 00:55:40 +0000 (0:00:13.065) 0:00:18.470 *** 2025-09-03 00:56:02.772974 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2025-09-03 00:56:02.772986 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-09-03 00:56:02.772997 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-09-03 00:56:02.773008 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2025-09-03 00:56:02.773019 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-09-03 00:56:02.773030 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2025-09-03 00:56:02.773040 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2025-09-03 00:56:02.773051 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2025-09-03 00:56:02.773062 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2025-09-03 00:56:02.773073 | orchestrator | 2025-09-03 00:56:02.773084 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-03 00:56:02.773095 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-03 00:56:02.773108 | orchestrator | 2025-09-03 00:56:02.773119 | orchestrator | 2025-09-03 00:56:02.773129 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-03 00:56:02.773140 | orchestrator | Wednesday 03 September 2025 00:55:47 +0000 (0:00:06.725) 0:00:25.195 *** 2025-09-03 00:56:02.773151 | orchestrator | =============================================================================== 2025-09-03 00:56:02.773171 | orchestrator | Write ceph keys to the share directory --------------------------------- 13.07s 2025-09-03 00:56:02.773182 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.73s 2025-09-03 00:56:02.773193 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.19s 2025-09-03 00:56:02.773204 | orchestrator | Create share directory -------------------------------------------------- 1.03s 2025-09-03 00:56:02.773215 | orchestrator | 2025-09-03 00:56:02.773226 | orchestrator | 2025-09-03 00:56:02.773236 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-03 00:56:02.773248 | 
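The play above pulls the generated keyrings from the first monitor onto the manager and writes them both into a share directory and into the configuration repository, so later Kolla plays can pick them up. A compact sketch of that flow with standard modules follows; the ceph_keyrings list mirrors the item names visible in the log, while share_directory, configuration_directory, the staging path and the ceph-mon group name are assumed placeholders rather than the real OSISM defaults.

- name: Copy ceph keys to the configuration repository (sketch)
  hosts: testbed-manager
  vars:
    ceph_keyrings:
      - ceph.client.admin.keyring
      - ceph.client.cinder.keyring
      - ceph.client.cinder-backup.keyring
      - ceph.client.nova.keyring
      - ceph.client.glance.keyring
      - ceph.client.gnocchi.keyring
      - ceph.client.manila.keyring
    share_directory: /share/ceph                      # assumed path
    configuration_directory: /opt/configuration/ceph  # assumed path
  tasks:
    - name: Fetch all ceph keys
      ansible.builtin.fetch:
        src: "/etc/ceph/{{ item }}"
        dest: "/tmp/ceph-keys/{{ item }}"             # assumed staging path on the controller
        flat: true
      delegate_to: "{{ groups['ceph-mon'][0] }}"      # group name assumed; testbed-node-0 in this run
      loop: "{{ ceph_keyrings }}"

    - name: Create share directory
      ansible.builtin.file:
        path: "{{ share_directory }}"
        state: directory
        mode: "0750"
      delegate_to: localhost

    - name: Write ceph keys to the share directory
      ansible.builtin.copy:
        src: "/tmp/ceph-keys/{{ item }}"
        dest: "{{ share_directory }}/{{ item }}"
        mode: "0640"
      delegate_to: localhost
      loop: "{{ ceph_keyrings }}"

    - name: Write ceph keys to the configuration directory
      ansible.builtin.copy:
        src: "/tmp/ceph-keys/{{ item }}"
        dest: "{{ configuration_directory }}/{{ item }}"
        mode: "0640"
      loop: "{{ ceph_keyrings }}"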
orchestrator | 2025-09-03 00:56:02.773270 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-03 00:56:02.773282 | orchestrator | Wednesday 03 September 2025 00:54:13 +0000 (0:00:00.212) 0:00:00.212 *** 2025-09-03 00:56:02.773294 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:56:02.773307 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:56:02.773318 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:56:02.773329 | orchestrator | 2025-09-03 00:56:02.773340 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-03 00:56:02.773351 | orchestrator | Wednesday 03 September 2025 00:54:13 +0000 (0:00:00.200) 0:00:00.413 *** 2025-09-03 00:56:02.773364 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-09-03 00:56:02.773375 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-09-03 00:56:02.773386 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-09-03 00:56:02.773397 | orchestrator | 2025-09-03 00:56:02.773408 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-09-03 00:56:02.773418 | orchestrator | 2025-09-03 00:56:02.773429 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-03 00:56:02.773440 | orchestrator | Wednesday 03 September 2025 00:54:14 +0000 (0:00:00.280) 0:00:00.694 *** 2025-09-03 00:56:02.773451 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 00:56:02.773462 | orchestrator | 2025-09-03 00:56:02.773472 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-09-03 00:56:02.773483 | orchestrator | Wednesday 03 September 2025 00:54:14 +0000 (0:00:00.431) 0:00:01.126 *** 2025-09-03 00:56:02.773507 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 
'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-03 00:56:02.773542 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-03 00:56:02.773562 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-03 00:56:02.773582 | orchestrator | 2025-09-03 00:56:02.773594 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-09-03 00:56:02.773605 | orchestrator | Wednesday 03 September 2025 00:54:15 +0000 (0:00:00.901) 0:00:02.027 *** 2025-09-03 00:56:02.773616 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:56:02.773627 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:56:02.773638 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:56:02.773650 | orchestrator | 2025-09-03 00:56:02.773661 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-03 00:56:02.773671 | orchestrator | Wednesday 03 September 2025 00:54:15 +0000 (0:00:00.342) 0:00:02.369 *** 2025-09-03 00:56:02.773682 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-09-03 00:56:02.773693 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2025-09-03 00:56:02.773710 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-09-03 00:56:02.773721 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-09-03 00:56:02.773732 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-09-03 00:56:02.773743 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-09-03 00:56:02.773754 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-09-03 00:56:02.773765 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-09-03 00:56:02.773776 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-09-03 00:56:02.773786 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2025-09-03 00:56:02.773797 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2025-09-03 00:56:02.773808 | orchestrator | skipping: [testbed-node-1] => 
(item={'name': 'masakari', 'enabled': False})  2025-09-03 00:56:02.773819 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-09-03 00:56:02.773830 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-09-03 00:56:02.773840 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2025-09-03 00:56:02.773851 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-09-03 00:56:02.773862 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2025-09-03 00:56:02.773873 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2025-09-03 00:56:02.773884 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-09-03 00:56:02.773895 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-09-03 00:56:02.773906 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-09-03 00:56:02.773917 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-09-03 00:56:02.773927 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-09-03 00:56:02.773961 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-09-03 00:56:02.773974 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-09-03 00:56:02.773986 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-09-03 00:56:02.774002 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-09-03 00:56:02.774014 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-09-03 00:56:02.774074 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-09-03 00:56:02.774086 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-09-03 00:56:02.774097 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-09-03 00:56:02.774108 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-09-03 00:56:02.774119 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-09-03 00:56:02.774130 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-09-03 00:56:02.774141 | orchestrator | 2025-09-03 00:56:02.774152 | orchestrator | TASK [horizon : 
Update policy file name] *************************************** 2025-09-03 00:56:02.774163 | orchestrator | Wednesday 03 September 2025 00:54:16 +0000 (0:00:00.645) 0:00:03.015 *** 2025-09-03 00:56:02.774174 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:56:02.774185 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:56:02.774196 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:56:02.774207 | orchestrator | 2025-09-03 00:56:02.774218 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-03 00:56:02.774229 | orchestrator | Wednesday 03 September 2025 00:54:16 +0000 (0:00:00.244) 0:00:03.259 *** 2025-09-03 00:56:02.774240 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:56:02.774251 | orchestrator | 2025-09-03 00:56:02.774262 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-03 00:56:02.774280 | orchestrator | Wednesday 03 September 2025 00:54:16 +0000 (0:00:00.111) 0:00:03.371 *** 2025-09-03 00:56:02.774292 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:56:02.774303 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:56:02.774315 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:56:02.774326 | orchestrator | 2025-09-03 00:56:02.774337 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-03 00:56:02.774348 | orchestrator | Wednesday 03 September 2025 00:54:17 +0000 (0:00:00.361) 0:00:03.733 *** 2025-09-03 00:56:02.774359 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:56:02.774370 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:56:02.774382 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:56:02.774393 | orchestrator | 2025-09-03 00:56:02.774404 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-03 00:56:02.774415 | orchestrator | Wednesday 03 September 2025 00:54:17 +0000 (0:00:00.264) 0:00:03.997 *** 2025-09-03 00:56:02.774426 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:56:02.774437 | orchestrator | 2025-09-03 00:56:02.774448 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-03 00:56:02.774467 | orchestrator | Wednesday 03 September 2025 00:54:17 +0000 (0:00:00.113) 0:00:04.110 *** 2025-09-03 00:56:02.774478 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:56:02.774489 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:56:02.774501 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:56:02.774512 | orchestrator | 2025-09-03 00:56:02.774523 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-03 00:56:02.774534 | orchestrator | Wednesday 03 September 2025 00:54:17 +0000 (0:00:00.256) 0:00:04.367 *** 2025-09-03 00:56:02.774545 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:56:02.774556 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:56:02.774567 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:56:02.774578 | orchestrator | 2025-09-03 00:56:02.774589 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-03 00:56:02.774600 | orchestrator | Wednesday 03 September 2025 00:54:18 +0000 (0:00:00.269) 0:00:04.636 *** 2025-09-03 00:56:02.774611 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:56:02.774622 | orchestrator | 2025-09-03 00:56:02.774633 | orchestrator | TASK [horizon : Update custom policy file name] 
******************************** 2025-09-03 00:56:02.774644 | orchestrator | Wednesday 03 September 2025 00:54:18 +0000 (0:00:00.098) 0:00:04.734 *** 2025-09-03 00:56:02.774655 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:56:02.774667 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:56:02.774678 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:56:02.774689 | orchestrator | 2025-09-03 00:56:02.774700 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-03 00:56:02.774711 | orchestrator | Wednesday 03 September 2025 00:54:18 +0000 (0:00:00.379) 0:00:05.114 *** 2025-09-03 00:56:02.774722 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:56:02.774733 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:56:02.774744 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:56:02.774755 | orchestrator | 2025-09-03 00:56:02.774767 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-03 00:56:02.774778 | orchestrator | Wednesday 03 September 2025 00:54:18 +0000 (0:00:00.249) 0:00:05.363 *** 2025-09-03 00:56:02.774789 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:56:02.774799 | orchestrator | 2025-09-03 00:56:02.774811 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-03 00:56:02.774827 | orchestrator | Wednesday 03 September 2025 00:54:18 +0000 (0:00:00.110) 0:00:05.473 *** 2025-09-03 00:56:02.774838 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:56:02.774849 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:56:02.774860 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:56:02.774871 | orchestrator | 2025-09-03 00:56:02.774883 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-03 00:56:02.774894 | orchestrator | Wednesday 03 September 2025 00:54:19 +0000 (0:00:00.277) 0:00:05.751 *** 2025-09-03 00:56:02.774905 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:56:02.774916 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:56:02.774927 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:56:02.774938 | orchestrator | 2025-09-03 00:56:02.774967 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-03 00:56:02.774979 | orchestrator | Wednesday 03 September 2025 00:54:19 +0000 (0:00:00.256) 0:00:06.007 *** 2025-09-03 00:56:02.774989 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:56:02.775000 | orchestrator | 2025-09-03 00:56:02.775011 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-03 00:56:02.775022 | orchestrator | Wednesday 03 September 2025 00:54:19 +0000 (0:00:00.225) 0:00:06.233 *** 2025-09-03 00:56:02.775033 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:56:02.775044 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:56:02.775055 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:56:02.775066 | orchestrator | 2025-09-03 00:56:02.775078 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-03 00:56:02.775096 | orchestrator | Wednesday 03 September 2025 00:54:19 +0000 (0:00:00.298) 0:00:06.532 *** 2025-09-03 00:56:02.775107 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:56:02.775119 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:56:02.775130 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:56:02.775141 | 
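The repeating block of "Update policy file name", "Check if policies shall be overwritten" and "Update custom policy file name" tasks appears once per service because policy_item.yml is included for each enabled item (ceilometer, cinder, designate, glance, keystone, magnum, manila, neutron, nova, octavia). Only testbed-node-0 is listed for the check, consistent with a run-once lookup against the deployment host, and the custom-policy update is skipped whenever no custom policy file has been supplied. A rough sketch of what such an included file does; the variable names are illustrative, not the actual kolla-ansible ones.

# policy_item.yml (sketch) -- included once per service, with `project` as the loop variable
- name: Update policy file name
  ansible.builtin.set_fact:
    horizon_policy_file: "{{ project.name }}_policy.yaml"

- name: Check if policies shall be overwritten
  ansible.builtin.stat:
    path: "{{ node_custom_config }}/horizon/{{ horizon_policy_file }}"
  delegate_to: localhost
  run_once: true
  register: custom_policy

- name: Update custom policy file name
  ansible.builtin.set_fact:
    custom_policy_files: "{{ custom_policy_files | default([]) + [custom_policy.stat.path] }}"
  when: custom_policy.stat.exists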
orchestrator | 2025-09-03 00:56:02.775152 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-03 00:56:02.775164 | orchestrator | Wednesday 03 September 2025 00:54:20 +0000 (0:00:00.326) 0:00:06.859 *** 2025-09-03 00:56:02.775175 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:56:02.775185 | orchestrator | 2025-09-03 00:56:02.775196 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-03 00:56:02.775207 | orchestrator | Wednesday 03 September 2025 00:54:20 +0000 (0:00:00.112) 0:00:06.971 *** 2025-09-03 00:56:02.775218 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:56:02.775230 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:56:02.775241 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:56:02.775252 | orchestrator | 2025-09-03 00:56:02.775263 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-03 00:56:02.775274 | orchestrator | Wednesday 03 September 2025 00:54:20 +0000 (0:00:00.277) 0:00:07.249 *** 2025-09-03 00:56:02.775285 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:56:02.775296 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:56:02.775308 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:56:02.775319 | orchestrator | 2025-09-03 00:56:02.775336 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-03 00:56:02.775348 | orchestrator | Wednesday 03 September 2025 00:54:21 +0000 (0:00:00.500) 0:00:07.750 *** 2025-09-03 00:56:02.775359 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:56:02.775370 | orchestrator | 2025-09-03 00:56:02.775381 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-03 00:56:02.775392 | orchestrator | Wednesday 03 September 2025 00:54:21 +0000 (0:00:00.143) 0:00:07.893 *** 2025-09-03 00:56:02.775403 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:56:02.775414 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:56:02.775426 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:56:02.775437 | orchestrator | 2025-09-03 00:56:02.775448 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-03 00:56:02.775459 | orchestrator | Wednesday 03 September 2025 00:54:21 +0000 (0:00:00.300) 0:00:08.194 *** 2025-09-03 00:56:02.775470 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:56:02.775481 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:56:02.775492 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:56:02.775503 | orchestrator | 2025-09-03 00:56:02.775514 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-03 00:56:02.775525 | orchestrator | Wednesday 03 September 2025 00:54:22 +0000 (0:00:00.348) 0:00:08.542 *** 2025-09-03 00:56:02.775536 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:56:02.775547 | orchestrator | 2025-09-03 00:56:02.775557 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-03 00:56:02.775568 | orchestrator | Wednesday 03 September 2025 00:54:22 +0000 (0:00:00.130) 0:00:08.673 *** 2025-09-03 00:56:02.775579 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:56:02.775590 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:56:02.775602 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:56:02.775613 | orchestrator | 
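The long dictionaries printed for every node above (and again below in the certificate-copy tasks) are the horizon service definition that the role iterates over. Rendered as YAML and trimmed to a single node (testbed-node-0), with the disabled ENABLE_* flags and repeated redirect options omitted, the same structure is easier to read:

horizon:
  container_name: horizon
  group: horizon
  enabled: true
  image: registry.osism.tech/kolla/horizon:2024.2
  environment:                 # dashboard plugins toggled per service
    ENABLE_DESIGNATE: "yes"
    ENABLE_MAGNUM: "yes"
    ENABLE_MANILA: "yes"
    ENABLE_OCTAVIA: "yes"
    FORCE_GENERATE: "no"
  volumes:
    - /etc/kolla/horizon/:/var/lib/kolla/config_files/:ro
    - /etc/localtime:/etc/localtime:ro
    - /etc/timezone:/etc/timezone:ro
    - kolla_logs:/var/log/kolla/
  healthcheck:
    test: ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:80"]
    interval: "30"
    retries: "3"
    start_period: "5"
    timeout: "30"
  haproxy:
    horizon:                   # internal VIP, HTTPS frontend on 443 -> backend port 80
      mode: http
      port: "443"
      listen_port: "80"
      frontend_http_extra:
        - use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }
    horizon_redirect:          # internal HTTP 80 -> redirect
      mode: redirect
      port: "80"
    horizon_external:          # external VIP, api.testbed.osism.xyz
      mode: http
      external_fqdn: api.testbed.osism.xyz
      port: "443"
      listen_port: "80"
    horizon_external_redirect:
      mode: redirect
      port: "80"
    acme_client:
      enabled: true
      with_frontend: false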
2025-09-03 00:56:02.775624 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-03 00:56:02.775635 | orchestrator | Wednesday 03 September 2025 00:54:22 +0000 (0:00:00.379) 0:00:09.052 *** 2025-09-03 00:56:02.775646 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:56:02.775657 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:56:02.775668 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:56:02.775679 | orchestrator | 2025-09-03 00:56:02.775690 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-03 00:56:02.775701 | orchestrator | Wednesday 03 September 2025 00:54:23 +0000 (0:00:00.511) 0:00:09.564 *** 2025-09-03 00:56:02.775718 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:56:02.775729 | orchestrator | 2025-09-03 00:56:02.775740 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-03 00:56:02.775751 | orchestrator | Wednesday 03 September 2025 00:54:23 +0000 (0:00:00.143) 0:00:09.708 *** 2025-09-03 00:56:02.775762 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:56:02.775773 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:56:02.775784 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:56:02.775796 | orchestrator | 2025-09-03 00:56:02.775807 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-03 00:56:02.775818 | orchestrator | Wednesday 03 September 2025 00:54:23 +0000 (0:00:00.320) 0:00:10.028 *** 2025-09-03 00:56:02.775829 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:56:02.775840 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:56:02.775851 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:56:02.775862 | orchestrator | 2025-09-03 00:56:02.775879 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-03 00:56:02.775890 | orchestrator | Wednesday 03 September 2025 00:54:23 +0000 (0:00:00.312) 0:00:10.341 *** 2025-09-03 00:56:02.775901 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:56:02.775912 | orchestrator | 2025-09-03 00:56:02.775924 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-03 00:56:02.775935 | orchestrator | Wednesday 03 September 2025 00:54:23 +0000 (0:00:00.128) 0:00:10.469 *** 2025-09-03 00:56:02.775974 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:56:02.775985 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:56:02.775997 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:56:02.776007 | orchestrator | 2025-09-03 00:56:02.776019 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2025-09-03 00:56:02.776029 | orchestrator | Wednesday 03 September 2025 00:54:24 +0000 (0:00:00.530) 0:00:11.000 *** 2025-09-03 00:56:02.776040 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:56:02.776051 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:56:02.776062 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:56:02.776073 | orchestrator | 2025-09-03 00:56:02.776084 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2025-09-03 00:56:02.776095 | orchestrator | Wednesday 03 September 2025 00:54:26 +0000 (0:00:01.662) 0:00:12.663 *** 2025-09-03 00:56:02.776106 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-09-03 
00:56:02.776117 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-09-03 00:56:02.776127 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-09-03 00:56:02.776138 | orchestrator | 2025-09-03 00:56:02.776149 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2025-09-03 00:56:02.776160 | orchestrator | Wednesday 03 September 2025 00:54:27 +0000 (0:00:01.532) 0:00:14.195 *** 2025-09-03 00:56:02.776171 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-09-03 00:56:02.776182 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-09-03 00:56:02.776193 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-09-03 00:56:02.776204 | orchestrator | 2025-09-03 00:56:02.776215 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2025-09-03 00:56:02.776226 | orchestrator | Wednesday 03 September 2025 00:54:30 +0000 (0:00:02.404) 0:00:16.599 *** 2025-09-03 00:56:02.776243 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-09-03 00:56:02.776255 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-09-03 00:56:02.776266 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-09-03 00:56:02.776291 | orchestrator | 2025-09-03 00:56:02.776303 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2025-09-03 00:56:02.776314 | orchestrator | Wednesday 03 September 2025 00:54:32 +0000 (0:00:02.372) 0:00:18.972 *** 2025-09-03 00:56:02.776325 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:56:02.776336 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:56:02.776347 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:56:02.776358 | orchestrator | 2025-09-03 00:56:02.776369 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2025-09-03 00:56:02.776380 | orchestrator | Wednesday 03 September 2025 00:54:32 +0000 (0:00:00.296) 0:00:19.269 *** 2025-09-03 00:56:02.776391 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:56:02.776402 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:56:02.776413 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:56:02.776424 | orchestrator | 2025-09-03 00:56:02.776435 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-03 00:56:02.776447 | orchestrator | Wednesday 03 September 2025 00:54:33 +0000 (0:00:00.311) 0:00:19.580 *** 2025-09-03 00:56:02.776458 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 00:56:02.776469 | orchestrator | 2025-09-03 00:56:02.776480 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-09-03 00:56:02.776491 | orchestrator | Wednesday 03 September 2025 00:54:33 +0000 (0:00:00.591) 0:00:20.172 *** 2025-09-03 00:56:02.776509 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 
'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-03 00:56:02.776532 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-03 00:56:02.776559 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-03 00:56:02.776577 | orchestrator | 2025-09-03 00:56:02.776589 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-09-03 00:56:02.776600 | orchestrator | Wednesday 03 September 2025 00:54:35 +0000 (0:00:01.936) 0:00:22.108 *** 2025-09-03 00:56:02.776621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 
'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-03 00:56:02.776634 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:56:02.776652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-03 00:56:02.776677 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:56:02.776690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-03 00:56:02.776703 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:56:02.776714 | orchestrator | 2025-09-03 00:56:02.776729 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-09-03 00:56:02.776741 | orchestrator | Wednesday 03 September 2025 00:54:36 +0000 (0:00:00.623) 0:00:22.731 *** 2025-09-03 00:56:02.776760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 
'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-03 00:56:02.776779 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:56:02.776796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-03 00:56:02.776808 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:56:02.776828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-03 00:56:02.776846 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:56:02.776858 | orchestrator | 2025-09-03 00:56:02.776869 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-09-03 00:56:02.776880 | orchestrator | Wednesday 03 September 2025 00:54:37 +0000 (0:00:00.859) 0:00:23.591 *** 2025-09-03 00:56:02.776897 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 
'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-03 00:56:02.776925 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 
'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-03 00:56:02.776964 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-03 00:56:02.776985 | orchestrator | 2025-09-03 00:56:02.776996 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-03 00:56:02.777007 | orchestrator | Wednesday 03 September 2025 00:54:38 +0000 (0:00:01.483) 0:00:25.074 *** 2025-09-03 00:56:02.777018 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:56:02.777029 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:56:02.777040 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:56:02.777051 | orchestrator | 2025-09-03 00:56:02.777062 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-03 00:56:02.777073 | orchestrator | Wednesday 03 September 2025 00:54:38 +0000 (0:00:00.306) 0:00:25.381 *** 2025-09-03 00:56:02.777084 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 00:56:02.777095 | orchestrator | 2025-09-03 
00:56:02.777106 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2025-09-03 00:56:02.777117 | orchestrator | Wednesday 03 September 2025 00:54:39 +0000 (0:00:00.539) 0:00:25.920 *** 2025-09-03 00:56:02.777128 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:56:02.777139 | orchestrator | 2025-09-03 00:56:02.777155 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2025-09-03 00:56:02.777167 | orchestrator | Wednesday 03 September 2025 00:54:41 +0000 (0:00:02.174) 0:00:28.095 *** 2025-09-03 00:56:02.777178 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:56:02.777189 | orchestrator | 2025-09-03 00:56:02.777200 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2025-09-03 00:56:02.777211 | orchestrator | Wednesday 03 September 2025 00:54:44 +0000 (0:00:02.624) 0:00:30.719 *** 2025-09-03 00:56:02.777222 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:56:02.777233 | orchestrator | 2025-09-03 00:56:02.777244 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-09-03 00:56:02.777255 | orchestrator | Wednesday 03 September 2025 00:54:58 +0000 (0:00:14.813) 0:00:45.533 *** 2025-09-03 00:56:02.777266 | orchestrator | 2025-09-03 00:56:02.777277 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-09-03 00:56:02.777288 | orchestrator | Wednesday 03 September 2025 00:54:59 +0000 (0:00:00.069) 0:00:45.602 *** 2025-09-03 00:56:02.777299 | orchestrator | 2025-09-03 00:56:02.777309 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-09-03 00:56:02.777320 | orchestrator | Wednesday 03 September 2025 00:54:59 +0000 (0:00:00.076) 0:00:45.679 *** 2025-09-03 00:56:02.777331 | orchestrator | 2025-09-03 00:56:02.777342 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2025-09-03 00:56:02.777353 | orchestrator | Wednesday 03 September 2025 00:54:59 +0000 (0:00:00.071) 0:00:45.750 *** 2025-09-03 00:56:02.777364 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:56:02.777375 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:56:02.777386 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:56:02.777397 | orchestrator | 2025-09-03 00:56:02.777408 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-03 00:56:02.777419 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2025-09-03 00:56:02.777431 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-09-03 00:56:02.777442 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-09-03 00:56:02.777453 | orchestrator | 2025-09-03 00:56:02.777464 | orchestrator | 2025-09-03 00:56:02.777484 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-03 00:56:02.777495 | orchestrator | Wednesday 03 September 2025 00:55:59 +0000 (0:01:00.208) 0:01:45.959 *** 2025-09-03 00:56:02.777507 | orchestrator | =============================================================================== 2025-09-03 00:56:02.777517 | orchestrator | horizon : Restart horizon container ------------------------------------ 60.21s 2025-09-03 00:56:02.777529 | 
orchestrator | horizon : Running Horizon bootstrap container -------------------------- 14.81s 2025-09-03 00:56:02.777540 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.62s 2025-09-03 00:56:02.777555 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.40s 2025-09-03 00:56:02.777566 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.37s 2025-09-03 00:56:02.777577 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.17s 2025-09-03 00:56:02.777588 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.94s 2025-09-03 00:56:02.777599 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.66s 2025-09-03 00:56:02.777610 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.53s 2025-09-03 00:56:02.777621 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.48s 2025-09-03 00:56:02.777632 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 0.90s 2025-09-03 00:56:02.777643 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.86s 2025-09-03 00:56:02.777654 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.65s 2025-09-03 00:56:02.777665 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.62s 2025-09-03 00:56:02.777675 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.59s 2025-09-03 00:56:02.777686 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.54s 2025-09-03 00:56:02.777697 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.53s 2025-09-03 00:56:02.777708 | orchestrator | horizon : Update policy file name --------------------------------------- 0.51s 2025-09-03 00:56:02.777719 | orchestrator | horizon : Update policy file name --------------------------------------- 0.50s 2025-09-03 00:56:02.777730 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.43s 2025-09-03 00:56:02.777741 | orchestrator | 2025-09-03 00:56:02 | INFO  | Task 03b56fc5-4c70-4502-96da-a8d3df293d6b is in state STARTED 2025-09-03 00:56:02.777752 | orchestrator | 2025-09-03 00:56:02 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:56:05.822887 | orchestrator | 2025-09-03 00:56:05 | INFO  | Task 4d2348d5-93ad-4368-a272-6786b8c24e42 is in state STARTED 2025-09-03 00:56:05.823035 | orchestrator | 2025-09-03 00:56:05 | INFO  | Task 03b56fc5-4c70-4502-96da-a8d3df293d6b is in state STARTED 2025-09-03 00:56:05.823050 | orchestrator | 2025-09-03 00:56:05 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:56:08.864595 | orchestrator | 2025-09-03 00:56:08 | INFO  | Task 4d2348d5-93ad-4368-a272-6786b8c24e42 is in state STARTED 2025-09-03 00:56:08.870279 | orchestrator | 2025-09-03 00:56:08 | INFO  | Task 03b56fc5-4c70-4502-96da-a8d3df293d6b is in state STARTED 2025-09-03 00:56:08.870314 | orchestrator | 2025-09-03 00:56:08 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:56:11.917332 | orchestrator | 2025-09-03 00:56:11 | INFO  | Task 4d2348d5-93ad-4368-a272-6786b8c24e42 is in state STARTED 2025-09-03 00:56:11.918608 | orchestrator | 2025-09-03 00:56:11 | 
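The changed/skipping records in the Horizon play above embed each Kolla service definition as a Python-literal dict (item={'key': ..., 'value': ...}), which makes details such as the haproxy ACME rules and the healthcheck commands hard to pick out by eye. A minimal sketch for extracting those dicts from a saved copy of this console output; the file name job-output.txt and the helper names are hypothetical, and ast.literal_eval only works because Ansible prints the items as Python literals:

    import ast
    import re

    def extract_items(log_path):
        """Yield the item={...} dicts embedded in changed:/skipping: records."""
        text = open(log_path, encoding="utf-8").read()
        for match in re.finditer(r"\(item=\{", text):
            start = match.end() - 1        # index of the opening '{'
            depth, pos = 0, start
            while pos < len(text):         # walk to the matching closing brace;
                if text[pos] == "{":       # assumes braces inside strings (the
                    depth += 1             # haproxy path_reg ACLs) stay balanced
                elif text[pos] == "}":
                    depth -= 1
                    if depth == 0:
                        break
                pos += 1
            try:
                yield ast.literal_eval(text[start:pos + 1])
            except (SyntaxError, ValueError):
                pass                       # skip items truncated by log wrapping

    if __name__ == "__main__":
        for item in extract_items("job-output.txt"):      # hypothetical path
            value = item.get("value", {})
            print(item.get("key"), value.get("healthcheck", {}).get("test"))

Run against the Horizon section above, this prints one key/healthcheck pair per service item, e.g. horizon ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'].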
INFO  | Task 03b56fc5-4c70-4502-96da-a8d3df293d6b is in state STARTED 2025-09-03 00:56:11.918640 | orchestrator | 2025-09-03 00:56:11 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:56:14.966830 | orchestrator | 2025-09-03 00:56:14 | INFO  | Task 4d2348d5-93ad-4368-a272-6786b8c24e42 is in state STARTED 2025-09-03 00:56:14.967834 | orchestrator | 2025-09-03 00:56:14 | INFO  | Task 03b56fc5-4c70-4502-96da-a8d3df293d6b is in state STARTED 2025-09-03 00:56:14.967866 | orchestrator | 2025-09-03 00:56:14 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:56:18.012611 | orchestrator | 2025-09-03 00:56:18 | INFO  | Task 4d2348d5-93ad-4368-a272-6786b8c24e42 is in state STARTED 2025-09-03 00:56:18.015325 | orchestrator | 2025-09-03 00:56:18 | INFO  | Task 03b56fc5-4c70-4502-96da-a8d3df293d6b is in state STARTED 2025-09-03 00:56:18.015377 | orchestrator | 2025-09-03 00:56:18 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:56:21.064034 | orchestrator | 2025-09-03 00:56:21 | INFO  | Task 4d2348d5-93ad-4368-a272-6786b8c24e42 is in state STARTED 2025-09-03 00:56:21.066382 | orchestrator | 2025-09-03 00:56:21 | INFO  | Task 03b56fc5-4c70-4502-96da-a8d3df293d6b is in state STARTED 2025-09-03 00:56:21.066415 | orchestrator | 2025-09-03 00:56:21 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:56:24.112455 | orchestrator | 2025-09-03 00:56:24 | INFO  | Task 4d2348d5-93ad-4368-a272-6786b8c24e42 is in state STARTED 2025-09-03 00:56:24.113140 | orchestrator | 2025-09-03 00:56:24 | INFO  | Task 03b56fc5-4c70-4502-96da-a8d3df293d6b is in state STARTED 2025-09-03 00:56:24.113744 | orchestrator | 2025-09-03 00:56:24 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:56:27.157024 | orchestrator | 2025-09-03 00:56:27 | INFO  | Task 4d2348d5-93ad-4368-a272-6786b8c24e42 is in state STARTED 2025-09-03 00:56:27.160095 | orchestrator | 2025-09-03 00:56:27 | INFO  | Task 03b56fc5-4c70-4502-96da-a8d3df293d6b is in state STARTED 2025-09-03 00:56:27.160127 | orchestrator | 2025-09-03 00:56:27 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:56:30.208617 | orchestrator | 2025-09-03 00:56:30 | INFO  | Task 4d2348d5-93ad-4368-a272-6786b8c24e42 is in state STARTED 2025-09-03 00:56:30.210375 | orchestrator | 2025-09-03 00:56:30 | INFO  | Task 03b56fc5-4c70-4502-96da-a8d3df293d6b is in state STARTED 2025-09-03 00:56:30.210407 | orchestrator | 2025-09-03 00:56:30 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:56:33.257298 | orchestrator | 2025-09-03 00:56:33 | INFO  | Task 4d2348d5-93ad-4368-a272-6786b8c24e42 is in state STARTED 2025-09-03 00:56:33.258821 | orchestrator | 2025-09-03 00:56:33 | INFO  | Task 03b56fc5-4c70-4502-96da-a8d3df293d6b is in state STARTED 2025-09-03 00:56:33.258841 | orchestrator | 2025-09-03 00:56:33 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:56:36.299732 | orchestrator | 2025-09-03 00:56:36 | INFO  | Task 4d2348d5-93ad-4368-a272-6786b8c24e42 is in state STARTED 2025-09-03 00:56:36.301309 | orchestrator | 2025-09-03 00:56:36 | INFO  | Task 03b56fc5-4c70-4502-96da-a8d3df293d6b is in state STARTED 2025-09-03 00:56:36.301339 | orchestrator | 2025-09-03 00:56:36 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:56:39.347670 | orchestrator | 2025-09-03 00:56:39 | INFO  | Task 4d2348d5-93ad-4368-a272-6786b8c24e42 is in state STARTED 2025-09-03 00:56:39.347779 | orchestrator | 2025-09-03 00:56:39 | INFO  | Task 03b56fc5-4c70-4502-96da-a8d3df293d6b is in state STARTED 2025-09-03 00:56:39.347796 | 
orchestrator | 2025-09-03 00:56:39 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:56:42.386131 | orchestrator | 2025-09-03 00:56:42 | INFO  | Task 4d2348d5-93ad-4368-a272-6786b8c24e42 is in state STARTED 2025-09-03 00:56:42.387150 | orchestrator | 2025-09-03 00:56:42 | INFO  | Task 03b56fc5-4c70-4502-96da-a8d3df293d6b is in state STARTED 2025-09-03 00:56:42.387258 | orchestrator | 2025-09-03 00:56:42 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:56:45.429391 | orchestrator | 2025-09-03 00:56:45 | INFO  | Task d92208f5-272d-4d8b-a62e-9079936e3103 is in state STARTED 2025-09-03 00:56:45.429500 | orchestrator | 2025-09-03 00:56:45 | INFO  | Task a691a2f3-7f2a-460d-8b50-ec5d5d34d854 is in state STARTED 2025-09-03 00:56:45.433100 | orchestrator | 2025-09-03 00:56:45 | INFO  | Task 4d2348d5-93ad-4368-a272-6786b8c24e42 is in state STARTED 2025-09-03 00:56:45.433985 | orchestrator | 2025-09-03 00:56:45 | INFO  | Task 0d55e511-b2f8-4e71-a795-f5a39ac01abd is in state STARTED 2025-09-03 00:56:45.437669 | orchestrator | 2025-09-03 00:56:45 | INFO  | Task 03b56fc5-4c70-4502-96da-a8d3df293d6b is in state SUCCESS 2025-09-03 00:56:45.437694 | orchestrator | 2025-09-03 00:56:45 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:56:48.483444 | orchestrator | 2025-09-03 00:56:48 | INFO  | Task d92208f5-272d-4d8b-a62e-9079936e3103 is in state STARTED 2025-09-03 00:56:48.483553 | orchestrator | 2025-09-03 00:56:48 | INFO  | Task a691a2f3-7f2a-460d-8b50-ec5d5d34d854 is in state STARTED 2025-09-03 00:56:48.483569 | orchestrator | 2025-09-03 00:56:48 | INFO  | Task 4d2348d5-93ad-4368-a272-6786b8c24e42 is in state STARTED 2025-09-03 00:56:48.483582 | orchestrator | 2025-09-03 00:56:48 | INFO  | Task 0d55e511-b2f8-4e71-a795-f5a39ac01abd is in state STARTED 2025-09-03 00:56:48.483594 | orchestrator | 2025-09-03 00:56:48 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:56:51.522398 | orchestrator | 2025-09-03 00:56:51 | INFO  | Task fe315f6a-6930-4183-8919-7507c45e2ce7 is in state STARTED 2025-09-03 00:56:51.523007 | orchestrator | 2025-09-03 00:56:51 | INFO  | Task d92208f5-272d-4d8b-a62e-9079936e3103 is in state STARTED 2025-09-03 00:56:51.523595 | orchestrator | 2025-09-03 00:56:51 | INFO  | Task a691a2f3-7f2a-460d-8b50-ec5d5d34d854 is in state STARTED 2025-09-03 00:56:51.526844 | orchestrator | 2025-09-03 00:56:51 | INFO  | Task 4d2348d5-93ad-4368-a272-6786b8c24e42 is in state STARTED 2025-09-03 00:56:51.527551 | orchestrator | 2025-09-03 00:56:51 | INFO  | Task 2002b55e-596e-4b28-b07c-d56e5f7a86a3 is in state STARTED 2025-09-03 00:56:51.531445 | orchestrator | 2025-09-03 00:56:51 | INFO  | Task 0d55e511-b2f8-4e71-a795-f5a39ac01abd is in state SUCCESS 2025-09-03 00:56:51.531489 | orchestrator | 2025-09-03 00:56:51 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:56:54.574320 | orchestrator | 2025-09-03 00:56:54 | INFO  | Task fe315f6a-6930-4183-8919-7507c45e2ce7 is in state STARTED 2025-09-03 00:56:54.575184 | orchestrator | 2025-09-03 00:56:54 | INFO  | Task d92208f5-272d-4d8b-a62e-9079936e3103 is in state STARTED 2025-09-03 00:56:54.575667 | orchestrator | 2025-09-03 00:56:54 | INFO  | Task a691a2f3-7f2a-460d-8b50-ec5d5d34d854 is in state STARTED 2025-09-03 00:56:54.577687 | orchestrator | 2025-09-03 00:56:54 | INFO  | Task 4d2348d5-93ad-4368-a272-6786b8c24e42 is in state SUCCESS 2025-09-03 00:56:54.579315 | orchestrator | 2025-09-03 00:56:54.579373 | orchestrator | 2025-09-03 00:56:54.579389 | orchestrator | PLAY [Apply role cephclient] 
*************************************************** 2025-09-03 00:56:54.579402 | orchestrator | 2025-09-03 00:56:54.579414 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2025-09-03 00:56:54.579426 | orchestrator | Wednesday 03 September 2025 00:55:52 +0000 (0:00:00.247) 0:00:00.247 *** 2025-09-03 00:56:54.579437 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2025-09-03 00:56:54.579450 | orchestrator | 2025-09-03 00:56:54.579461 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2025-09-03 00:56:54.579472 | orchestrator | Wednesday 03 September 2025 00:55:52 +0000 (0:00:00.256) 0:00:00.503 *** 2025-09-03 00:56:54.579512 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2025-09-03 00:56:54.579524 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2025-09-03 00:56:54.579536 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2025-09-03 00:56:54.579571 | orchestrator | 2025-09-03 00:56:54.579583 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2025-09-03 00:56:54.579595 | orchestrator | Wednesday 03 September 2025 00:55:53 +0000 (0:00:01.234) 0:00:01.738 *** 2025-09-03 00:56:54.579606 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2025-09-03 00:56:54.579617 | orchestrator | 2025-09-03 00:56:54.579628 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2025-09-03 00:56:54.579639 | orchestrator | Wednesday 03 September 2025 00:55:54 +0000 (0:00:01.146) 0:00:02.885 *** 2025-09-03 00:56:54.579650 | orchestrator | changed: [testbed-manager] 2025-09-03 00:56:54.579663 | orchestrator | 2025-09-03 00:56:54.579675 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2025-09-03 00:56:54.579686 | orchestrator | Wednesday 03 September 2025 00:55:55 +0000 (0:00:01.024) 0:00:03.909 *** 2025-09-03 00:56:54.579697 | orchestrator | changed: [testbed-manager] 2025-09-03 00:56:54.579708 | orchestrator | 2025-09-03 00:56:54.579719 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2025-09-03 00:56:54.579730 | orchestrator | Wednesday 03 September 2025 00:55:56 +0000 (0:00:00.854) 0:00:04.763 *** 2025-09-03 00:56:54.579741 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 
2025-09-03 00:56:54.579752 | orchestrator | ok: [testbed-manager] 2025-09-03 00:56:54.579763 | orchestrator | 2025-09-03 00:56:54.579774 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2025-09-03 00:56:54.579785 | orchestrator | Wednesday 03 September 2025 00:56:33 +0000 (0:00:37.222) 0:00:41.986 *** 2025-09-03 00:56:54.579797 | orchestrator | changed: [testbed-manager] => (item=ceph) 2025-09-03 00:56:54.579808 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2025-09-03 00:56:54.579819 | orchestrator | changed: [testbed-manager] => (item=rados) 2025-09-03 00:56:54.579831 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2025-09-03 00:56:54.579841 | orchestrator | changed: [testbed-manager] => (item=rbd) 2025-09-03 00:56:54.579853 | orchestrator | 2025-09-03 00:56:54.579864 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2025-09-03 00:56:54.579875 | orchestrator | Wednesday 03 September 2025 00:56:37 +0000 (0:00:03.991) 0:00:45.977 *** 2025-09-03 00:56:54.579887 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2025-09-03 00:56:54.579900 | orchestrator | 2025-09-03 00:56:54.579913 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2025-09-03 00:56:54.579926 | orchestrator | Wednesday 03 September 2025 00:56:38 +0000 (0:00:00.482) 0:00:46.460 *** 2025-09-03 00:56:54.579976 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:56:54.579991 | orchestrator | 2025-09-03 00:56:54.580004 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2025-09-03 00:56:54.580016 | orchestrator | Wednesday 03 September 2025 00:56:38 +0000 (0:00:00.132) 0:00:46.592 *** 2025-09-03 00:56:54.580029 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:56:54.580041 | orchestrator | 2025-09-03 00:56:54.580054 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2025-09-03 00:56:54.580067 | orchestrator | Wednesday 03 September 2025 00:56:38 +0000 (0:00:00.293) 0:00:46.886 *** 2025-09-03 00:56:54.580079 | orchestrator | changed: [testbed-manager] 2025-09-03 00:56:54.580093 | orchestrator | 2025-09-03 00:56:54.580106 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2025-09-03 00:56:54.580118 | orchestrator | Wednesday 03 September 2025 00:56:40 +0000 (0:00:01.920) 0:00:48.806 *** 2025-09-03 00:56:54.580130 | orchestrator | changed: [testbed-manager] 2025-09-03 00:56:54.580152 | orchestrator | 2025-09-03 00:56:54.580166 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2025-09-03 00:56:54.580178 | orchestrator | Wednesday 03 September 2025 00:56:41 +0000 (0:00:00.718) 0:00:49.524 *** 2025-09-03 00:56:54.580192 | orchestrator | changed: [testbed-manager] 2025-09-03 00:56:54.580204 | orchestrator | 2025-09-03 00:56:54.580217 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2025-09-03 00:56:54.580229 | orchestrator | Wednesday 03 September 2025 00:56:42 +0000 (0:00:00.622) 0:00:50.146 *** 2025-09-03 00:56:54.580242 | orchestrator | ok: [testbed-manager] => (item=ceph) 2025-09-03 00:56:54.580253 | orchestrator | ok: [testbed-manager] => (item=rados) 2025-09-03 00:56:54.580264 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2025-09-03 00:56:54.580276 | orchestrator | ok: 
[testbed-manager] => (item=rbd) 2025-09-03 00:56:54.580295 | orchestrator | 2025-09-03 00:56:54.580328 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-03 00:56:54.580350 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-03 00:56:54.580370 | orchestrator | 2025-09-03 00:56:54.580388 | orchestrator | 2025-09-03 00:56:54.580452 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-03 00:56:54.580465 | orchestrator | Wednesday 03 September 2025 00:56:43 +0000 (0:00:01.525) 0:00:51.672 *** 2025-09-03 00:56:54.580476 | orchestrator | =============================================================================== 2025-09-03 00:56:54.580487 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 37.22s 2025-09-03 00:56:54.580497 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.99s 2025-09-03 00:56:54.580508 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.92s 2025-09-03 00:56:54.580519 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.53s 2025-09-03 00:56:54.580529 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.23s 2025-09-03 00:56:54.580540 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.15s 2025-09-03 00:56:54.580551 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 1.02s 2025-09-03 00:56:54.580561 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.85s 2025-09-03 00:56:54.580572 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.72s 2025-09-03 00:56:54.580583 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.62s 2025-09-03 00:56:54.580593 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.48s 2025-09-03 00:56:54.580604 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.29s 2025-09-03 00:56:54.580615 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.26s 2025-09-03 00:56:54.580625 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.13s 2025-09-03 00:56:54.580636 | orchestrator | 2025-09-03 00:56:54.580647 | orchestrator | 2025-09-03 00:56:54.580657 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-03 00:56:54.580668 | orchestrator | 2025-09-03 00:56:54.580679 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-03 00:56:54.580689 | orchestrator | Wednesday 03 September 2025 00:56:47 +0000 (0:00:00.175) 0:00:00.175 *** 2025-09-03 00:56:54.580700 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:56:54.580711 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:56:54.580723 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:56:54.580734 | orchestrator | 2025-09-03 00:56:54.580745 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-03 00:56:54.580755 | orchestrator | Wednesday 03 September 2025 00:56:48 +0000 (0:00:00.306) 0:00:00.482 *** 2025-09-03 00:56:54.580766 | orchestrator | ok: 
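The cephclient handlers above ("Ensure that all containers are up", "Wait for an healthy service") keep polling until the container's Docker healthcheck reports healthy. A rough sketch of that kind of wait loop, assuming a container literally named cephclient and a plain docker inspect probe; the actual osism.services.cephclient role may implement the check differently:

    import json
    import subprocess
    import time

    def wait_healthy(container="cephclient", retries=10, delay=5):
        """Poll the Docker health status until the container reports 'healthy'."""
        for attempt in range(1, retries + 1):
            proc = subprocess.run(
                ["docker", "inspect", "--format", "{{json .State.Health.Status}}", container],
                capture_output=True, text=True,
            )
            status = None
            if proc.returncode == 0 and proc.stdout.strip():
                status = json.loads(proc.stdout)       # e.g. "starting" or "healthy"
            if status == "healthy":
                return True
            print(f"attempt {attempt}/{retries}: status={status!r}, retrying in {delay}s")
            time.sleep(delay)
        return False

    if __name__ == "__main__":
        raise SystemExit(0 if wait_healthy() else 1)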
[testbed-node-0] => (item=enable_keystone_True) 2025-09-03 00:56:54.580785 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-09-03 00:56:54.580797 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-09-03 00:56:54.580807 | orchestrator | 2025-09-03 00:56:54.580818 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2025-09-03 00:56:54.580829 | orchestrator | 2025-09-03 00:56:54.580840 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2025-09-03 00:56:54.580850 | orchestrator | Wednesday 03 September 2025 00:56:48 +0000 (0:00:00.766) 0:00:01.249 *** 2025-09-03 00:56:54.580861 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:56:54.580872 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:56:54.580883 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:56:54.580894 | orchestrator | 2025-09-03 00:56:54.580905 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-03 00:56:54.580916 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-03 00:56:54.580928 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-03 00:56:54.580961 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-03 00:56:54.580974 | orchestrator | 2025-09-03 00:56:54.580985 | orchestrator | 2025-09-03 00:56:54.580996 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-03 00:56:54.581007 | orchestrator | Wednesday 03 September 2025 00:56:49 +0000 (0:00:00.731) 0:00:01.981 *** 2025-09-03 00:56:54.581017 | orchestrator | =============================================================================== 2025-09-03 00:56:54.581028 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.77s 2025-09-03 00:56:54.581039 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.73s 2025-09-03 00:56:54.581049 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s 2025-09-03 00:56:54.581060 | orchestrator | 2025-09-03 00:56:54.581070 | orchestrator | 2025-09-03 00:56:54.581081 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-03 00:56:54.581092 | orchestrator | 2025-09-03 00:56:54.581102 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-03 00:56:54.581113 | orchestrator | Wednesday 03 September 2025 00:54:13 +0000 (0:00:00.250) 0:00:00.250 *** 2025-09-03 00:56:54.581124 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:56:54.581134 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:56:54.581145 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:56:54.581156 | orchestrator | 2025-09-03 00:56:54.581167 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-03 00:56:54.581177 | orchestrator | Wednesday 03 September 2025 00:54:13 +0000 (0:00:00.265) 0:00:00.516 *** 2025-09-03 00:56:54.581195 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-09-03 00:56:54.581206 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-09-03 00:56:54.581217 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-09-03 
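The "Waiting for Keystone public port to be UP" play above simply blocks until Keystone answers TCP connections before later service playbooks proceed (the task presumably wraps Ansible's wait_for). The same probe as a minimal Python sketch, with the endpoint api.testbed.osism.xyz:5000 assumed from the keystone haproxy entries shown below:

    import socket
    import time

    def wait_for_port(host, port, timeout=300, interval=5):
        """Return True once a TCP connection to host:port succeeds within `timeout` seconds."""
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            try:
                with socket.create_connection((host, port), timeout=interval):
                    return True               # something is listening
            except OSError:
                time.sleep(interval)          # refused / unreachable, try again
        return False

    if __name__ == "__main__":
        # Hypothetical endpoint: the keystone_external haproxy entry listens on port 5000.
        print(wait_for_port("api.testbed.osism.xyz", 5000))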
00:56:54.581228 | orchestrator | 2025-09-03 00:56:54.581238 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2025-09-03 00:56:54.581249 | orchestrator | 2025-09-03 00:56:54.581293 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-03 00:56:54.581306 | orchestrator | Wednesday 03 September 2025 00:54:14 +0000 (0:00:00.359) 0:00:00.875 *** 2025-09-03 00:56:54.581324 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 00:56:54.581343 | orchestrator | 2025-09-03 00:56:54.581361 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-09-03 00:56:54.581378 | orchestrator | Wednesday 03 September 2025 00:54:14 +0000 (0:00:00.523) 0:00:01.399 *** 2025-09-03 00:56:54.581401 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-03 00:56:54.581441 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-03 00:56:54.581466 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-03 00:56:54.581486 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-03 00:56:54.581539 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-03 00:56:54.581561 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-03 00:56:54.581573 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-03 00:56:54.581585 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-03 00:56:54.581597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-03 00:56:54.581608 | orchestrator | 2025-09-03 00:56:54.581620 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-09-03 00:56:54.581631 | orchestrator | Wednesday 03 September 2025 00:54:16 +0000 (0:00:01.625) 0:00:03.024 *** 2025-09-03 00:56:54.581642 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-09-03 00:56:54.581653 | orchestrator | 2025-09-03 00:56:54.581664 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-09-03 00:56:54.581675 | orchestrator | Wednesday 03 September 2025 00:54:17 +0000 (0:00:00.728) 0:00:03.753 *** 2025-09-03 00:56:54.581686 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:56:54.581697 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:56:54.581708 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:56:54.581719 | orchestrator | 2025-09-03 00:56:54.581729 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2025-09-03 00:56:54.581740 | orchestrator | Wednesday 03 September 2025 00:54:17 +0000 (0:00:00.420) 0:00:04.174 *** 2025-09-03 00:56:54.581751 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-03 00:56:54.581762 | orchestrator | 2025-09-03 00:56:54.581778 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-03 00:56:54.581796 | orchestrator | Wednesday 03 September 2025 00:54:18 +0000 (0:00:00.579) 0:00:04.753 *** 2025-09-03 00:56:54.581807 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 00:56:54.581819 | orchestrator | 2025-09-03 00:56:54.581835 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-09-03 00:56:54.581847 | orchestrator | Wednesday 03 September 2025 00:54:18 +0000 (0:00:00.512) 0:00:05.266 *** 2025-09-03 00:56:54.581859 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': 
'5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-03 00:56:54.581872 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-03 00:56:54.581885 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-03 00:56:54.581897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-03 00:56:54.581931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-03 00:56:54.582061 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-03 00:56:54.582077 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-03 00:56:54.582088 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-03 00:56:54.582099 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-03 00:56:54.582109 | orchestrator | 2025-09-03 00:56:54.582119 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-09-03 00:56:54.582129 | orchestrator | Wednesday 03 September 2025 00:54:21 +0000 (0:00:03.066) 0:00:08.332 *** 2025-09-03 00:56:54.582146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-03 00:56:54.582179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-03 00:56:54.582190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-03 00:56:54.582201 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:56:54.582211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-03 00:56:54.582222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-03 00:56:54.582233 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-03 00:56:54.582263 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:56:54.582285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-03 00:56:54.582297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-03 00:56:54.582307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-03 00:56:54.582317 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:56:54.582327 | orchestrator | 2025-09-03 00:56:54.582337 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-09-03 00:56:54.582347 | orchestrator | Wednesday 03 September 2025 00:54:22 +0000 (0:00:00.884) 0:00:09.217 *** 2025-09-03 00:56:54.582358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-03 00:56:54.582368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-03 00:56:54.582389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-03 00:56:54.582400 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:56:54.582428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-03 00:56:54.582447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-03 00:56:54.582467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-03 00:56:54.582480 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:56:54.582490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-03 00:56:54.582508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-03 00:56:54.582529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-03 00:56:54.582540 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:56:54.582550 | orchestrator | 
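The loop items printed above are the same three kolla service definitions (keystone, keystone-ssh, keystone-fernet) repeated for every host; between nodes only the healthcheck_curl address changes (192.168.16.10/.11/.12). The 'healthcheck' sub-dict in each item is what ends up as the Docker healthcheck of the container. Below is a minimal Python sketch of that mapping, assuming the interval/start_period/timeout values are seconds (kolla-ansible's convention); healthcheck_flags is a hypothetical helper for illustration, not kolla's kolla_container module, and the URL is simply the node-0 value taken from the items above.

    # Sketch only: translate a kolla-style 'healthcheck' dict (as printed in the
    # loop items above) into equivalent 'docker run' healthcheck flags.
    healthcheck = {
        "interval": "30",
        "retries": "3",
        "start_period": "5",
        "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:5000"],  # node-0 value from the log
        "timeout": "30",
    }

    def healthcheck_flags(hc):
        """Build 'docker run'-style flags; the numeric values are assumed to be seconds."""
        return [
            "--health-cmd=" + hc["test"][1],                      # CMD-SHELL: run through a shell in the container
            "--health-interval=" + hc["interval"] + "s",
            "--health-retries=" + hc["retries"],
            "--health-start-period=" + hc["start_period"] + "s",
            "--health-timeout=" + hc["timeout"] + "s",
        ]

    print(" ".join(healthcheck_flags(healthcheck)))

The next task then copies the per-service config.json files, which tell the kolla containers which configuration files to move into place when they start.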
2025-09-03 00:56:54.582560 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-09-03 00:56:54.582570 | orchestrator | Wednesday 03 September 2025 00:54:23 +0000 (0:00:00.833) 0:00:10.050 *** 2025-09-03 00:56:54.582581 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-03 00:56:54.582592 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-03 00:56:54.582609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-03 00:56:54.582629 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 
'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-03 00:56:54.582640 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-03 00:56:54.582650 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-03 00:56:54.582660 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-03 00:56:54.582670 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-03 00:56:54.582686 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-03 00:56:54.582696 | orchestrator | 2025-09-03 00:56:54.582706 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-09-03 00:56:54.582715 | orchestrator | Wednesday 03 September 2025 00:54:26 +0000 (0:00:03.281) 0:00:13.331 *** 2025-09-03 00:56:54.582736 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-03 00:56:54.582748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-03 00:56:54.582758 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-03 00:56:54.582769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-03 00:56:54.582785 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-03 00:56:54.582800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-03 00:56:54.582816 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-03 00:56:54.582826 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-03 00:56:54.582837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-03 00:56:54.582847 | orchestrator | 2025-09-03 00:56:54.582856 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-09-03 00:56:54.582872 | orchestrator | Wednesday 03 September 2025 00:54:32 +0000 (0:00:05.537) 0:00:18.869 *** 2025-09-03 00:56:54.582882 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:56:54.582892 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:56:54.582902 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:56:54.582911 | orchestrator | 2025-09-03 00:56:54.582921 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2025-09-03 00:56:54.582931 | orchestrator | Wednesday 03 September 2025 00:54:33 +0000 (0:00:01.469) 0:00:20.338 *** 2025-09-03 00:56:54.582965 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:56:54.582977 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:56:54.582986 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:56:54.582996 | orchestrator | 2025-09-03 00:56:54.583006 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-09-03 00:56:54.583015 | orchestrator | Wednesday 03 September 2025 00:54:34 +0000 (0:00:00.544) 0:00:20.882 *** 2025-09-03 00:56:54.583025 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:56:54.583034 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:56:54.583044 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:56:54.583054 | orchestrator | 2025-09-03 00:56:54.583063 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2025-09-03 00:56:54.583073 | orchestrator | Wednesday 03 September 2025 00:54:34 +0000 (0:00:00.269) 0:00:21.152 *** 2025-09-03 00:56:54.583082 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:56:54.583092 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:56:54.583102 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:56:54.583111 | orchestrator | 2025-09-03 00:56:54.583121 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-09-03 00:56:54.583131 | orchestrator | Wednesday 03 September 2025 00:54:35 +0000 (0:00:00.461) 0:00:21.613 *** 2025-09-03 00:56:54.583146 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-03 00:56:54.583163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-03 00:56:54.583174 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-03 00:56:54.583195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-03 00:56:54.583206 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-03 00:56:54.583216 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-03 00:56:54.583238 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-03 00:56:54.583249 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-03 00:56:54.583265 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-03 00:56:54.583275 | orchestrator | 2025-09-03 00:56:54.583285 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-03 00:56:54.583295 | orchestrator | Wednesday 03 September 2025 00:54:37 +0000 (0:00:02.375) 0:00:23.988 *** 2025-09-03 00:56:54.583305 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:56:54.583314 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:56:54.583324 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:56:54.583333 | orchestrator | 2025-09-03 00:56:54.583343 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2025-09-03 00:56:54.583353 | orchestrator | Wednesday 03 September 2025 00:54:37 +0000 (0:00:00.295) 0:00:24.284 *** 2025-09-03 00:56:54.583362 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-09-03 00:56:54.583372 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-09-03 00:56:54.583381 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-09-03 00:56:54.583391 | orchestrator | 2025-09-03 00:56:54.583401 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-09-03 00:56:54.583410 | orchestrator | Wednesday 03 September 2025 00:54:39 +0000 (0:00:01.465) 0:00:25.750 *** 2025-09-03 00:56:54.583420 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-03 00:56:54.583429 | orchestrator | 2025-09-03 00:56:54.583443 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-09-03 00:56:54.583459 | orchestrator | Wednesday 03 September 2025 00:54:40 +0000 (0:00:00.893) 0:00:26.643 *** 2025-09-03 00:56:54.583476 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:56:54.583494 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:56:54.583513 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:56:54.583530 | orchestrator | 2025-09-03 00:56:54.583542 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2025-09-03 00:56:54.583552 | orchestrator | Wednesday 03 September 2025 00:54:40 +0000 (0:00:00.743) 0:00:27.387 *** 2025-09-03 00:56:54.583562 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-03 00:56:54.583571 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-09-03 00:56:54.583581 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-09-03 00:56:54.583591 | orchestrator | 2025-09-03 00:56:54.583600 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2025-09-03 00:56:54.583610 | orchestrator | Wednesday 03 September 2025 00:54:41 +0000 (0:00:01.041) 0:00:28.429 *** 2025-09-03 00:56:54.583620 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:56:54.583629 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:56:54.583639 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:56:54.583649 | orchestrator | 2025-09-03 00:56:54.583658 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2025-09-03 00:56:54.583668 | orchestrator | Wednesday 03 September 2025 00:54:42 +0000 (0:00:00.295) 0:00:28.725 *** 2025-09-03 00:56:54.583678 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-09-03 00:56:54.583687 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-09-03 00:56:54.583697 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-09-03 00:56:54.583718 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-09-03 00:56:54.583728 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-09-03 00:56:54.583744 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-09-03 00:56:54.583754 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-09-03 00:56:54.583764 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-09-03 00:56:54.583774 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 
'fernet-node-sync.sh'}) 2025-09-03 00:56:54.583783 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-09-03 00:56:54.583793 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-09-03 00:56:54.583802 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-09-03 00:56:54.583812 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-09-03 00:56:54.583821 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-09-03 00:56:54.583831 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-09-03 00:56:54.583840 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-03 00:56:54.583850 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-03 00:56:54.583859 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-03 00:56:54.583869 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-03 00:56:54.583879 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-03 00:56:54.583888 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-03 00:56:54.583898 | orchestrator | 2025-09-03 00:56:54.583908 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2025-09-03 00:56:54.583917 | orchestrator | Wednesday 03 September 2025 00:54:50 +0000 (0:00:08.804) 0:00:37.529 *** 2025-09-03 00:56:54.583927 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-03 00:56:54.583936 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-03 00:56:54.584000 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-03 00:56:54.584010 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-03 00:56:54.584019 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-03 00:56:54.584029 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-03 00:56:54.584038 | orchestrator | 2025-09-03 00:56:54.584048 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-09-03 00:56:54.584057 | orchestrator | Wednesday 03 September 2025 00:54:53 +0000 (0:00:02.964) 0:00:40.494 *** 2025-09-03 00:56:54.584068 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 
'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-03 00:56:54.584099 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-03 00:56:54.584111 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-03 00:56:54.584122 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-03 00:56:54.584133 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-03 00:56:54.584143 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-03 00:56:54.584158 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-03 00:56:54.584178 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-03 00:56:54.584189 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-03 00:56:54.584199 | orchestrator | 2025-09-03 00:56:54.584209 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-03 00:56:54.584219 | orchestrator | Wednesday 03 September 2025 00:54:56 +0000 (0:00:02.181) 0:00:42.675 *** 2025-09-03 00:56:54.584229 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:56:54.584238 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:56:54.584248 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:56:54.584258 | orchestrator | 2025-09-03 00:56:54.584267 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-09-03 00:56:54.584277 | 
orchestrator | Wednesday 03 September 2025 00:54:56 +0000 (0:00:00.302) 0:00:42.978 *** 2025-09-03 00:56:54.584287 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:56:54.584296 | orchestrator | 2025-09-03 00:56:54.584306 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2025-09-03 00:56:54.584315 | orchestrator | Wednesday 03 September 2025 00:54:58 +0000 (0:00:02.341) 0:00:45.319 *** 2025-09-03 00:56:54.584325 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:56:54.584334 | orchestrator | 2025-09-03 00:56:54.584344 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2025-09-03 00:56:54.584354 | orchestrator | Wednesday 03 September 2025 00:55:00 +0000 (0:00:02.150) 0:00:47.469 *** 2025-09-03 00:56:54.584363 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:56:54.584373 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:56:54.584383 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:56:54.584392 | orchestrator | 2025-09-03 00:56:54.584402 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2025-09-03 00:56:54.584411 | orchestrator | Wednesday 03 September 2025 00:55:01 +0000 (0:00:00.890) 0:00:48.360 *** 2025-09-03 00:56:54.584426 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:56:54.584436 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:56:54.584446 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:56:54.584456 | orchestrator | 2025-09-03 00:56:54.584465 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2025-09-03 00:56:54.584477 | orchestrator | Wednesday 03 September 2025 00:55:02 +0000 (0:00:00.514) 0:00:48.874 *** 2025-09-03 00:56:54.584494 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:56:54.584511 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:56:54.584527 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:56:54.584538 | orchestrator | 2025-09-03 00:56:54.584546 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2025-09-03 00:56:54.584554 | orchestrator | Wednesday 03 September 2025 00:55:02 +0000 (0:00:00.301) 0:00:49.176 *** 2025-09-03 00:56:54.584562 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:56:54.584570 | orchestrator | 2025-09-03 00:56:54.584578 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2025-09-03 00:56:54.584586 | orchestrator | Wednesday 03 September 2025 00:55:16 +0000 (0:00:13.571) 0:01:02.747 *** 2025-09-03 00:56:54.584594 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:56:54.584602 | orchestrator | 2025-09-03 00:56:54.584610 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-09-03 00:56:54.584617 | orchestrator | Wednesday 03 September 2025 00:55:26 +0000 (0:00:10.087) 0:01:12.834 *** 2025-09-03 00:56:54.584625 | orchestrator | 2025-09-03 00:56:54.584633 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-09-03 00:56:54.584641 | orchestrator | Wednesday 03 September 2025 00:55:26 +0000 (0:00:00.066) 0:01:12.901 *** 2025-09-03 00:56:54.584649 | orchestrator | 2025-09-03 00:56:54.584657 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-09-03 00:56:54.584664 | orchestrator | Wednesday 03 September 2025 00:55:26 +0000 (0:00:00.067) 0:01:12.968 *** 
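Up to this point the play has created the Keystone database, run the Keystone bootstrap container and the fernet bootstrap container on testbed-node-0 (these typically wrap keystone-manage bootstrap and keystone-manage fernet_setup in kolla-ansible), and flushed handlers; the handlers below restart the keystone-ssh, keystone-fernet and keystone containers before the fernet keys are distributed to the other nodes. For orientation, a small Python sketch of the standard upstream fernet key-repository convention that the fernet-rotate.sh and fernet-node-sync.sh scripts copied earlier build on; classify_fernet_keys is a hypothetical helper and nothing here is taken from this deployment's scripts.

    # Sketch only: how Keystone interprets the numbered key files in its fernet
    # key repository (e.g. /etc/keystone/fernet-keys): key 0 is the staged key,
    # the highest index is the primary (encryption) key, and the rest are
    # secondary keys kept only to validate older tokens.
    def classify_fernet_keys(indices):
        non_staged = [i for i in indices if i != 0]
        primary = max(non_staged) if non_staged else None
        roles = {}
        for i in sorted(indices):
            if i == 0:
                roles[i] = "staged"
            elif i == primary:
                roles[i] = "primary"
            else:
                roles[i] = "secondary"
        return roles

    # Right after the fernet bootstrap there are typically just two keys.
    print(classify_fernet_keys([0, 1]))        # {0: 'staged', 1: 'primary'}
    # After a few rotations, older keys become secondaries.
    print(classify_fernet_keys([0, 1, 2, 3]))  # {0: 'staged', 1: 'secondary', 2: 'secondary', 3: 'primary'}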
2025-09-03 00:56:54.584672 | orchestrator | 2025-09-03 00:56:54.584680 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2025-09-03 00:56:54.584688 | orchestrator | Wednesday 03 September 2025 00:55:26 +0000 (0:00:00.066) 0:01:13.035 *** 2025-09-03 00:56:54.584696 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:56:54.584704 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:56:54.584712 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:56:54.584720 | orchestrator | 2025-09-03 00:56:54.584727 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2025-09-03 00:56:54.584735 | orchestrator | Wednesday 03 September 2025 00:55:47 +0000 (0:00:21.000) 0:01:34.035 *** 2025-09-03 00:56:54.584743 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:56:54.584751 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:56:54.584763 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:56:54.584771 | orchestrator | 2025-09-03 00:56:54.584779 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2025-09-03 00:56:54.584787 | orchestrator | Wednesday 03 September 2025 00:55:57 +0000 (0:00:10.045) 0:01:44.081 *** 2025-09-03 00:56:54.584796 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:56:54.584804 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:56:54.584816 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:56:54.584825 | orchestrator | 2025-09-03 00:56:54.584833 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-03 00:56:54.584841 | orchestrator | Wednesday 03 September 2025 00:56:08 +0000 (0:00:11.299) 0:01:55.380 *** 2025-09-03 00:56:54.584848 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 00:56:54.584856 | orchestrator | 2025-09-03 00:56:54.584864 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2025-09-03 00:56:54.584872 | orchestrator | Wednesday 03 September 2025 00:56:09 +0000 (0:00:00.722) 0:01:56.103 *** 2025-09-03 00:56:54.584880 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:56:54.584893 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:56:54.584902 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:56:54.584909 | orchestrator | 2025-09-03 00:56:54.584917 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2025-09-03 00:56:54.584925 | orchestrator | Wednesday 03 September 2025 00:56:10 +0000 (0:00:00.781) 0:01:56.885 *** 2025-09-03 00:56:54.584933 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:56:54.584958 | orchestrator | 2025-09-03 00:56:54.584967 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2025-09-03 00:56:54.584975 | orchestrator | Wednesday 03 September 2025 00:56:12 +0000 (0:00:01.779) 0:01:58.664 *** 2025-09-03 00:56:54.584983 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2025-09-03 00:56:54.584991 | orchestrator | 2025-09-03 00:56:54.584999 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2025-09-03 00:56:54.585006 | orchestrator | Wednesday 03 September 2025 00:56:22 +0000 (0:00:10.277) 0:02:08.942 *** 2025-09-03 00:56:54.585014 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2025-09-03 
00:56:54.585022 | orchestrator | 2025-09-03 00:56:54.585030 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2025-09-03 00:56:54.585038 | orchestrator | Wednesday 03 September 2025 00:56:43 +0000 (0:00:20.643) 0:02:29.585 *** 2025-09-03 00:56:54.585045 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2025-09-03 00:56:54.585053 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2025-09-03 00:56:54.585061 | orchestrator | 2025-09-03 00:56:54.585069 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2025-09-03 00:56:54.585077 | orchestrator | Wednesday 03 September 2025 00:56:49 +0000 (0:00:06.178) 0:02:35.764 *** 2025-09-03 00:56:54.585085 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:56:54.585093 | orchestrator | 2025-09-03 00:56:54.585101 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2025-09-03 00:56:54.585108 | orchestrator | Wednesday 03 September 2025 00:56:49 +0000 (0:00:00.097) 0:02:35.862 *** 2025-09-03 00:56:54.585116 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:56:54.585124 | orchestrator | 2025-09-03 00:56:54.585132 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2025-09-03 00:56:54.585140 | orchestrator | Wednesday 03 September 2025 00:56:49 +0000 (0:00:00.091) 0:02:35.953 *** 2025-09-03 00:56:54.585148 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:56:54.585156 | orchestrator | 2025-09-03 00:56:54.585163 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2025-09-03 00:56:54.585171 | orchestrator | Wednesday 03 September 2025 00:56:49 +0000 (0:00:00.125) 0:02:36.079 *** 2025-09-03 00:56:54.585179 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:56:54.585187 | orchestrator | 2025-09-03 00:56:54.585195 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2025-09-03 00:56:54.585203 | orchestrator | Wednesday 03 September 2025 00:56:49 +0000 (0:00:00.435) 0:02:36.514 *** 2025-09-03 00:56:54.585211 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:56:54.585218 | orchestrator | 2025-09-03 00:56:54.585227 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-03 00:56:54.585234 | orchestrator | Wednesday 03 September 2025 00:56:53 +0000 (0:00:03.244) 0:02:39.759 *** 2025-09-03 00:56:54.585242 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:56:54.585250 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:56:54.585258 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:56:54.585266 | orchestrator | 2025-09-03 00:56:54.585274 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-03 00:56:54.585282 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-09-03 00:56:54.585291 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-09-03 00:56:54.585304 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-09-03 00:56:54.585312 | orchestrator | 2025-09-03 00:56:54.585320 | orchestrator | 2025-09-03 00:56:54.585328 | orchestrator | TASKS RECAP 
******************************************************************** 2025-09-03 00:56:54.585336 | orchestrator | Wednesday 03 September 2025 00:56:53 +0000 (0:00:00.585) 0:02:40.344 *** 2025-09-03 00:56:54.585343 | orchestrator | =============================================================================== 2025-09-03 00:56:54.585351 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 21.00s 2025-09-03 00:56:54.585363 | orchestrator | service-ks-register : keystone | Creating services --------------------- 20.64s 2025-09-03 00:56:54.585371 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 13.57s 2025-09-03 00:56:54.585379 | orchestrator | keystone : Restart keystone container ---------------------------------- 11.30s 2025-09-03 00:56:54.585387 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 10.28s 2025-09-03 00:56:54.585399 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 10.09s 2025-09-03 00:56:54.585407 | orchestrator | keystone : Restart keystone-fernet container --------------------------- 10.05s 2025-09-03 00:56:54.585415 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.80s 2025-09-03 00:56:54.585423 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 6.18s 2025-09-03 00:56:54.585431 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.54s 2025-09-03 00:56:54.585439 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.28s 2025-09-03 00:56:54.585447 | orchestrator | keystone : Creating default user role ----------------------------------- 3.24s 2025-09-03 00:56:54.585455 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.07s 2025-09-03 00:56:54.585462 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.96s 2025-09-03 00:56:54.585470 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.38s 2025-09-03 00:56:54.585478 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.34s 2025-09-03 00:56:54.585486 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.18s 2025-09-03 00:56:54.585494 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.15s 2025-09-03 00:56:54.585502 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.78s 2025-09-03 00:56:54.585515 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.63s 2025-09-03 00:56:54.585529 | orchestrator | 2025-09-03 00:56:54 | INFO  | Task 2002b55e-596e-4b28-b07c-d56e5f7a86a3 is in state STARTED 2025-09-03 00:56:54.585544 | orchestrator | 2025-09-03 00:56:54 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:56:57.676488 | orchestrator | 2025-09-03 00:56:57 | INFO  | Task fe315f6a-6930-4183-8919-7507c45e2ce7 is in state STARTED 2025-09-03 00:56:57.676604 | orchestrator | 2025-09-03 00:56:57 | INFO  | Task f53a3ec7-70f2-490b-90d0-e98642e9d075 is in state STARTED 2025-09-03 00:56:57.676619 | orchestrator | 2025-09-03 00:56:57 | INFO  | Task d92208f5-272d-4d8b-a62e-9079936e3103 is in state STARTED 2025-09-03 00:56:57.676631 | orchestrator | 2025-09-03 00:56:57 | INFO  | Task 
a691a2f3-7f2a-460d-8b50-ec5d5d34d854 is in state STARTED 2025-09-03 00:56:57.676643 | orchestrator | 2025-09-03 00:56:57 | INFO  | Task 2002b55e-596e-4b28-b07c-d56e5f7a86a3 is in state STARTED 2025-09-03 00:56:57.676654 | orchestrator | 2025-09-03 00:56:57 | INFO  | Wait 1 second(s) until the next check
2025-09-03 00:57:00 to 00:58:07 | orchestrator | [polling output condensed: tasks fe315f6a-6930-4183-8919-7507c45e2ce7, f53a3ec7-70f2-490b-90d0-e98642e9d075, d92208f5-272d-4d8b-a62e-9079936e3103, a691a2f3-7f2a-460d-8b50-ec5d5d34d854 and 2002b55e-596e-4b28-b07c-d56e5f7a86a3 were re-checked roughly every 3 seconds and stayed in state STARTED; task fe315f6a-6930-4183-8919-7507c45e2ce7 reached state SUCCESS at 00:57:24 and task b965d647-a8dd-495c-afcb-f6fed92ce4b2 appeared in state STARTED at 00:57:27]
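The repeated "Task <uuid> is in state STARTED ... Wait 1 second(s) until the next check" lines in this log are produced by the deployment wrapper polling the OSISM manager for the state of the background tasks it has enqueued. A minimal sketch of that kind of wait loop, assuming a hypothetical get_task_state(task_id) helper that returns state strings such as STARTED, SUCCESS or FAILURE (the actual client call is not visible in this log), could look like:

    # Hedged sketch of a task-state wait loop; get_task_state() stands in for
    # whatever API the manager actually exposes and is purely illustrative.
    import time

    def wait_for_tasks(task_ids, get_task_state, interval: float = 1.0) -> None:
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_task_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                print(f"Wait {int(interval)} second(s) until the next check")
                time.sleep(interval)

Each finished task drops out of the set, which matches how completed task IDs (for example fe315f6a-6930-4183-8919-7507c45e2ce7 above) stop appearing in later polling rounds while newly enqueued ones join the list.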
orchestrator | 2025-09-03 00:58:07 | INFO  | Task f53a3ec7-70f2-490b-90d0-e98642e9d075 is in state STARTED 2025-09-03 00:58:07.318317 | orchestrator | 2025-09-03 00:58:07 | INFO  | Task d92208f5-272d-4d8b-a62e-9079936e3103 is in state SUCCESS 2025-09-03 00:58:07.318353 | orchestrator | 2025-09-03 00:58:07.318367 | orchestrator | 2025-09-03 00:58:07.318379 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-03 00:58:07.318391 | orchestrator | 2025-09-03 00:58:07.318402 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-03 00:58:07.318413 | orchestrator | Wednesday 03 September 2025 00:56:54 +0000 (0:00:00.221) 0:00:00.221 *** 2025-09-03 00:58:07.318425 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:58:07.318439 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:58:07.318451 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:58:07.318462 | orchestrator | ok: [testbed-manager] 2025-09-03 00:58:07.318473 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:58:07.318484 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:58:07.318496 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:58:07.318507 | orchestrator | 2025-09-03 00:58:07.318518 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-03 00:58:07.318530 | orchestrator | Wednesday 03 September 2025 00:56:55 +0000 (0:00:00.987) 0:00:01.209 *** 2025-09-03 00:58:07.318541 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2025-09-03 00:58:07.318583 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2025-09-03 00:58:07.318595 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2025-09-03 00:58:07.318607 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2025-09-03 00:58:07.318618 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2025-09-03 00:58:07.318629 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2025-09-03 00:58:07.318640 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2025-09-03 00:58:07.318652 | orchestrator | 2025-09-03 00:58:07.318663 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-09-03 00:58:07.318675 | orchestrator | 2025-09-03 00:58:07.318686 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2025-09-03 00:58:07.318779 | orchestrator | Wednesday 03 September 2025 00:56:56 +0000 (0:00:01.312) 0:00:02.522 *** 2025-09-03 00:58:07.318798 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-03 00:58:07.318811 | orchestrator | 2025-09-03 00:58:07.318822 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2025-09-03 00:58:07.318833 | orchestrator | Wednesday 03 September 2025 00:56:58 +0000 (0:00:02.142) 0:00:04.664 *** 2025-09-03 00:58:07.318844 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store)) 2025-09-03 00:58:07.318854 | orchestrator | 2025-09-03 00:58:07.318866 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2025-09-03 00:58:07.318877 | orchestrator | Wednesday 03 September 2025 00:57:02 +0000 (0:00:03.770) 0:00:08.435 *** 2025-09-03 00:58:07.318888 | orchestrator | changed: 
[testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2025-09-03 00:58:07.318901 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2025-09-03 00:58:07.318912 | orchestrator | 2025-09-03 00:58:07.318923 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2025-09-03 00:58:07.318957 | orchestrator | Wednesday 03 September 2025 00:57:07 +0000 (0:00:05.651) 0:00:14.086 *** 2025-09-03 00:58:07.318969 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-03 00:58:07.318980 | orchestrator | 2025-09-03 00:58:07.318991 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2025-09-03 00:58:07.319002 | orchestrator | Wednesday 03 September 2025 00:57:10 +0000 (0:00:02.676) 0:00:16.763 *** 2025-09-03 00:58:07.319026 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-03 00:58:07.319038 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service) 2025-09-03 00:58:07.319049 | orchestrator | 2025-09-03 00:58:07.319060 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2025-09-03 00:58:07.319071 | orchestrator | Wednesday 03 September 2025 00:57:14 +0000 (0:00:03.388) 0:00:20.151 *** 2025-09-03 00:58:07.319081 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-03 00:58:07.319093 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin) 2025-09-03 00:58:07.319104 | orchestrator | 2025-09-03 00:58:07.319115 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2025-09-03 00:58:07.319126 | orchestrator | Wednesday 03 September 2025 00:57:19 +0000 (0:00:05.518) 0:00:25.669 *** 2025-09-03 00:58:07.319136 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin) 2025-09-03 00:58:07.319147 | orchestrator | 2025-09-03 00:58:07.319158 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-03 00:58:07.319169 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-03 00:58:07.319181 | orchestrator | testbed-node-0 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-03 00:58:07.319204 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-03 00:58:07.319215 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-03 00:58:07.319226 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-03 00:58:07.319249 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-03 00:58:07.319261 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-03 00:58:07.319272 | orchestrator | 2025-09-03 00:58:07.319283 | orchestrator | 2025-09-03 00:58:07.319294 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-03 00:58:07.319385 | orchestrator | Wednesday 03 September 2025 00:57:24 +0000 (0:00:04.916) 0:00:30.585 *** 2025-09-03 00:58:07.319402 | orchestrator | =============================================================================== 2025-09-03 00:58:07.319413 | 
orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 5.65s 2025-09-03 00:58:07.319424 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 5.52s 2025-09-03 00:58:07.319435 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 4.92s 2025-09-03 00:58:07.319446 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.77s 2025-09-03 00:58:07.319457 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.39s 2025-09-03 00:58:07.319468 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 2.68s 2025-09-03 00:58:07.319479 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 2.14s 2025-09-03 00:58:07.319490 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.31s 2025-09-03 00:58:07.319501 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.99s 2025-09-03 00:58:07.319512 | orchestrator | 2025-09-03 00:58:07.319523 | orchestrator | 2025-09-03 00:58:07.319534 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2025-09-03 00:58:07.319545 | orchestrator | 2025-09-03 00:58:07.319556 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2025-09-03 00:58:07.319567 | orchestrator | Wednesday 03 September 2025 00:56:48 +0000 (0:00:00.272) 0:00:00.272 *** 2025-09-03 00:58:07.319578 | orchestrator | changed: [testbed-manager] 2025-09-03 00:58:07.319589 | orchestrator | 2025-09-03 00:58:07.319600 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2025-09-03 00:58:07.319611 | orchestrator | Wednesday 03 September 2025 00:56:50 +0000 (0:00:02.011) 0:00:02.284 *** 2025-09-03 00:58:07.319621 | orchestrator | changed: [testbed-manager] 2025-09-03 00:58:07.319632 | orchestrator | 2025-09-03 00:58:07.319643 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2025-09-03 00:58:07.319654 | orchestrator | Wednesday 03 September 2025 00:56:50 +0000 (0:00:00.882) 0:00:03.166 *** 2025-09-03 00:58:07.319665 | orchestrator | changed: [testbed-manager] 2025-09-03 00:58:07.319676 | orchestrator | 2025-09-03 00:58:07.319687 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2025-09-03 00:58:07.319698 | orchestrator | Wednesday 03 September 2025 00:56:52 +0000 (0:00:01.292) 0:00:04.459 *** 2025-09-03 00:58:07.319709 | orchestrator | changed: [testbed-manager] 2025-09-03 00:58:07.319719 | orchestrator | 2025-09-03 00:58:07.319730 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2025-09-03 00:58:07.319741 | orchestrator | Wednesday 03 September 2025 00:56:53 +0000 (0:00:01.069) 0:00:05.528 *** 2025-09-03 00:58:07.319752 | orchestrator | changed: [testbed-manager] 2025-09-03 00:58:07.319763 | orchestrator | 2025-09-03 00:58:07.319774 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2025-09-03 00:58:07.319794 | orchestrator | Wednesday 03 September 2025 00:56:54 +0000 (0:00:00.981) 0:00:06.510 *** 2025-09-03 00:58:07.319805 | orchestrator | changed: [testbed-manager] 2025-09-03 00:58:07.319816 | orchestrator | 2025-09-03 00:58:07.319827 | orchestrator | TASK [Enable the ceph 
dashboard] *********************************************** 2025-09-03 00:58:07.319844 | orchestrator | Wednesday 03 September 2025 00:56:55 +0000 (0:00:00.941) 0:00:07.452 *** 2025-09-03 00:58:07.319855 | orchestrator | changed: [testbed-manager] 2025-09-03 00:58:07.319866 | orchestrator | 2025-09-03 00:58:07.319877 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2025-09-03 00:58:07.319888 | orchestrator | Wednesday 03 September 2025 00:56:56 +0000 (0:00:01.140) 0:00:08.593 *** 2025-09-03 00:58:07.319899 | orchestrator | changed: [testbed-manager] 2025-09-03 00:58:07.319910 | orchestrator | 2025-09-03 00:58:07.319921 | orchestrator | TASK [Create admin user] ******************************************************* 2025-09-03 00:58:07.319957 | orchestrator | Wednesday 03 September 2025 00:56:57 +0000 (0:00:01.033) 0:00:09.626 *** 2025-09-03 00:58:07.319968 | orchestrator | changed: [testbed-manager] 2025-09-03 00:58:07.319979 | orchestrator | 2025-09-03 00:58:07.319990 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2025-09-03 00:58:07.320001 | orchestrator | Wednesday 03 September 2025 00:57:41 +0000 (0:00:43.895) 0:00:53.521 *** 2025-09-03 00:58:07.320012 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:58:07.320023 | orchestrator | 2025-09-03 00:58:07.320034 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-09-03 00:58:07.320044 | orchestrator | 2025-09-03 00:58:07.320055 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-09-03 00:58:07.320066 | orchestrator | Wednesday 03 September 2025 00:57:41 +0000 (0:00:00.117) 0:00:53.639 *** 2025-09-03 00:58:07.320077 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:58:07.320088 | orchestrator | 2025-09-03 00:58:07.320098 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-09-03 00:58:07.320109 | orchestrator | 2025-09-03 00:58:07.320120 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-09-03 00:58:07.320131 | orchestrator | Wednesday 03 September 2025 00:57:52 +0000 (0:00:11.493) 0:01:05.133 *** 2025-09-03 00:58:07.320142 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:58:07.320153 | orchestrator | 2025-09-03 00:58:07.320164 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-09-03 00:58:07.320174 | orchestrator | 2025-09-03 00:58:07.320185 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-09-03 00:58:07.320196 | orchestrator | Wednesday 03 September 2025 00:57:54 +0000 (0:00:01.204) 0:01:06.337 *** 2025-09-03 00:58:07.320208 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:58:07.320218 | orchestrator | 2025-09-03 00:58:07.320238 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-03 00:58:07.320249 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-03 00:58:07.320261 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-03 00:58:07.320272 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-03 00:58:07.320283 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 
failed=0 skipped=0 rescued=0 ignored=0 2025-09-03 00:58:07.320294 | orchestrator | 2025-09-03 00:58:07.320305 | orchestrator | 2025-09-03 00:58:07.320316 | orchestrator | 2025-09-03 00:58:07.320327 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-03 00:58:07.320338 | orchestrator | Wednesday 03 September 2025 00:58:05 +0000 (0:00:11.443) 0:01:17.780 *** 2025-09-03 00:58:07.320356 | orchestrator | =============================================================================== 2025-09-03 00:58:07.320368 | orchestrator | Create admin user ------------------------------------------------------ 43.90s 2025-09-03 00:58:07.320378 | orchestrator | Restart ceph manager service ------------------------------------------- 24.14s 2025-09-03 00:58:07.320389 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 2.01s 2025-09-03 00:58:07.320400 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.29s 2025-09-03 00:58:07.320411 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.14s 2025-09-03 00:58:07.320422 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.07s 2025-09-03 00:58:07.320433 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.03s 2025-09-03 00:58:07.320444 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 0.98s 2025-09-03 00:58:07.320455 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.94s 2025-09-03 00:58:07.320465 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 0.88s 2025-09-03 00:58:07.320476 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.12s 2025-09-03 00:58:07.320488 | orchestrator | 2025-09-03 00:58:07 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 00:58:07.320499 | orchestrator | 2025-09-03 00:58:07 | INFO  | Task a691a2f3-7f2a-460d-8b50-ec5d5d34d854 is in state STARTED 2025-09-03 00:58:07.320510 | orchestrator | 2025-09-03 00:58:07 | INFO  | Task 2002b55e-596e-4b28-b07c-d56e5f7a86a3 is in state STARTED 2025-09-03 00:58:07.320521 | orchestrator | 2025-09-03 00:58:07 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:58:10.342491 | orchestrator | 2025-09-03 00:58:10 | INFO  | Task f53a3ec7-70f2-490b-90d0-e98642e9d075 is in state STARTED 2025-09-03 00:58:10.343479 | orchestrator | 2025-09-03 00:58:10 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 00:58:10.344221 | orchestrator | 2025-09-03 00:58:10 | INFO  | Task a691a2f3-7f2a-460d-8b50-ec5d5d34d854 is in state STARTED 2025-09-03 00:58:10.346628 | orchestrator | 2025-09-03 00:58:10 | INFO  | Task 2002b55e-596e-4b28-b07c-d56e5f7a86a3 is in state STARTED 2025-09-03 00:58:10.346664 | orchestrator | 2025-09-03 00:58:10 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:58:13.368907 | orchestrator | 2025-09-03 00:58:13 | INFO  | Task f53a3ec7-70f2-490b-90d0-e98642e9d075 is in state STARTED 2025-09-03 00:58:13.369051 | orchestrator | 2025-09-03 00:58:13 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 00:58:13.369770 | orchestrator | 2025-09-03 00:58:13 | INFO  | Task a691a2f3-7f2a-460d-8b50-ec5d5d34d854 is in state STARTED 2025-09-03 00:58:13.370177 | orchestrator | 2025-09-03 00:58:13 | 
INFO  | Task 2002b55e-596e-4b28-b07c-d56e5f7a86a3 is in state STARTED 2025-09-03 00:58:13.370199 | orchestrator | 2025-09-03 00:58:13 | INFO  | Wait 1 second(s) until the next check
2025-09-03 00:58:16 to 00:59:29 | orchestrator | [polling output condensed: tasks f53a3ec7-70f2-490b-90d0-e98642e9d075, b965d647-a8dd-495c-afcb-f6fed92ce4b2, a691a2f3-7f2a-460d-8b50-ec5d5d34d854 and 2002b55e-596e-4b28-b07c-d56e5f7a86a3 were re-checked roughly every 3 seconds and stayed in state STARTED]
2025-09-03 00:59:32.458443 | orchestrator | 2025-09-03 00:59:32 | INFO  | Task f53a3ec7-70f2-490b-90d0-e98642e9d075 is in state STARTED 2025-09-03 00:59:32.459201 | orchestrator | 2025-09-03 00:59:32 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 00:59:32.460990 | orchestrator | 2025-09-03 00:59:32 | INFO  | Task a691a2f3-7f2a-460d-8b50-ec5d5d34d854 is in state STARTED 2025-09-03 00:59:32.461686 | orchestrator | 2025-09-03 00:59:32 | INFO  | Task 2ce95322-778b-4dfc-ac97-427d82cda0df is in state STARTED 2025-09-03 00:59:32.463182 | orchestrator | 2025-09-03 00:59:32 | INFO  | Task 2002b55e-596e-4b28-b07c-d56e5f7a86a3 is in state SUCCESS 2025-09-03 00:59:32.467336 | orchestrator | 2025-09-03 00:59:32.467374 | orchestrator | 2025-09-03 00:59:32.467387 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-03 00:59:32.467400 | orchestrator | 2025-09-03 00:59:32.467411 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-03 00:59:32.467423 | orchestrator | Wednesday 03 September 2025 00:56:54
+0000 (0:00:00.300) 0:00:00.300 *** 2025-09-03 00:59:32.467434 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:59:32.467448 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:59:32.467460 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:59:32.467471 | orchestrator | 2025-09-03 00:59:32.467482 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-03 00:59:32.467493 | orchestrator | Wednesday 03 September 2025 00:56:55 +0000 (0:00:00.474) 0:00:00.775 *** 2025-09-03 00:59:32.467521 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2025-09-03 00:59:32.467534 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2025-09-03 00:59:32.467545 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2025-09-03 00:59:32.467556 | orchestrator | 2025-09-03 00:59:32.467567 | orchestrator | PLAY [Apply role glance] ******************************************************* 2025-09-03 00:59:32.467577 | orchestrator | 2025-09-03 00:59:32.467588 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-09-03 00:59:32.467599 | orchestrator | Wednesday 03 September 2025 00:56:55 +0000 (0:00:00.492) 0:00:01.267 *** 2025-09-03 00:59:32.467610 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 00:59:32.467621 | orchestrator | 2025-09-03 00:59:32.467632 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2025-09-03 00:59:32.467643 | orchestrator | Wednesday 03 September 2025 00:56:56 +0000 (0:00:00.674) 0:00:01.942 *** 2025-09-03 00:59:32.467653 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2025-09-03 00:59:32.467664 | orchestrator | 2025-09-03 00:59:32.467675 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2025-09-03 00:59:32.467686 | orchestrator | Wednesday 03 September 2025 00:57:00 +0000 (0:00:03.970) 0:00:05.912 *** 2025-09-03 00:59:32.467696 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2025-09-03 00:59:32.467707 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2025-09-03 00:59:32.467739 | orchestrator | 2025-09-03 00:59:32.467751 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2025-09-03 00:59:32.467762 | orchestrator | Wednesday 03 September 2025 00:57:06 +0000 (0:00:05.902) 0:00:11.814 *** 2025-09-03 00:59:32.467773 | orchestrator | changed: [testbed-node-0] => (item=service) 2025-09-03 00:59:32.467784 | orchestrator | 2025-09-03 00:59:32.467795 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2025-09-03 00:59:32.467806 | orchestrator | Wednesday 03 September 2025 00:57:08 +0000 (0:00:02.763) 0:00:14.578 *** 2025-09-03 00:59:32.467817 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-03 00:59:32.467828 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2025-09-03 00:59:32.467839 | orchestrator | 2025-09-03 00:59:32.467850 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2025-09-03 00:59:32.467861 | orchestrator | Wednesday 03 September 2025 00:57:12 +0000 (0:00:03.585) 0:00:18.164 *** 2025-09-03 00:59:32.467872 | orchestrator | ok: [testbed-node-0] => 
(item=admin) 2025-09-03 00:59:32.467883 | orchestrator | 2025-09-03 00:59:32.467893 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2025-09-03 00:59:32.467904 | orchestrator | Wednesday 03 September 2025 00:57:15 +0000 (0:00:02.830) 0:00:20.994 *** 2025-09-03 00:59:32.467915 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2025-09-03 00:59:32.467949 | orchestrator | 2025-09-03 00:59:32.467963 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2025-09-03 00:59:32.467976 | orchestrator | Wednesday 03 September 2025 00:57:19 +0000 (0:00:04.002) 0:00:24.997 *** 2025-09-03 00:59:32.468012 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-03 00:59:32.468033 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-03 00:59:32.468154 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-03 00:59:32.468181 | orchestrator | 2025-09-03 00:59:32.468195 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-09-03 00:59:32.468208 | orchestrator | Wednesday 03 September 2025 00:57:23 +0000 (0:00:04.098) 0:00:29.095 *** 2025-09-03 00:59:32.468221 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 00:59:32.468235 | orchestrator | 2025-09-03 00:59:32.468257 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2025-09-03 00:59:32.468271 | orchestrator | Wednesday 03 September 2025 00:57:23 +0000 (0:00:00.488) 0:00:29.583 *** 2025-09-03 00:59:32.468284 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:59:32.468297 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:59:32.468308 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:59:32.468319 | orchestrator | 2025-09-03 00:59:32.468330 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2025-09-03 00:59:32.468342 | orchestrator | Wednesday 03 September 2025 00:57:27 
+0000 (0:00:04.102) 0:00:33.686 *** 2025-09-03 00:59:32.468352 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-03 00:59:32.468369 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-03 00:59:32.468387 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-03 00:59:32.468398 | orchestrator | 2025-09-03 00:59:32.468409 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2025-09-03 00:59:32.468420 | orchestrator | Wednesday 03 September 2025 00:57:29 +0000 (0:00:01.383) 0:00:35.070 *** 2025-09-03 00:59:32.468431 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-03 00:59:32.468442 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-03 00:59:32.468453 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-03 00:59:32.468464 | orchestrator | 2025-09-03 00:59:32.468475 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2025-09-03 00:59:32.468486 | orchestrator | Wednesday 03 September 2025 00:57:30 +0000 (0:00:01.181) 0:00:36.252 *** 2025-09-03 00:59:32.468497 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:59:32.468508 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:59:32.468519 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:59:32.468530 | orchestrator | 2025-09-03 00:59:32.468541 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2025-09-03 00:59:32.468552 | orchestrator | Wednesday 03 September 2025 00:57:31 +0000 (0:00:00.585) 0:00:36.837 *** 2025-09-03 00:59:32.468563 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:59:32.468574 | orchestrator | 2025-09-03 00:59:32.468585 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2025-09-03 00:59:32.468596 | orchestrator | Wednesday 03 September 2025 00:57:31 +0000 (0:00:00.267) 0:00:37.105 *** 2025-09-03 00:59:32.468607 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:59:32.468618 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:59:32.468629 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:59:32.468640 | orchestrator | 2025-09-03 00:59:32.468651 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-09-03 00:59:32.468661 | orchestrator | Wednesday 03 September 2025 00:57:31 +0000 (0:00:00.249) 0:00:37.355 *** 2025-09-03 00:59:32.468672 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 00:59:32.468683 | orchestrator | 2025-09-03 00:59:32.468694 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2025-09-03 00:59:32.468705 | orchestrator | Wednesday 03 September 2025 00:57:32 +0000 (0:00:00.632) 0:00:37.987 *** 2025-09-03 00:59:32.468723 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': 
'', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-03 00:59:32.468749 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-03 00:59:32.468762 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-03 00:59:32.468775 | orchestrator | 2025-09-03 00:59:32.468786 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-09-03 00:59:32.468797 | orchestrator | Wednesday 03 September 2025 00:57:36 +0000 (0:00:04.277) 0:00:42.265 *** 2025-09-03 00:59:32.468831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-03 00:59:32.468844 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:59:32.468857 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-03 00:59:32.468870 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:59:32.468895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-03 00:59:32.468916 | 
orchestrator | skipping: [testbed-node-1] 2025-09-03 00:59:32.468945 | orchestrator | 2025-09-03 00:59:32.468957 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-09-03 00:59:32.468968 | orchestrator | Wednesday 03 September 2025 00:57:40 +0000 (0:00:04.161) 0:00:46.426 *** 2025-09-03 00:59:32.468980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-03 00:59:32.468993 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:59:32.469012 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-03 00:59:32.469034 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:59:32.469051 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-03 00:59:32.469063 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:59:32.469075 | orchestrator | 2025-09-03 00:59:32.469085 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-09-03 00:59:32.469096 | orchestrator | Wednesday 03 September 2025 00:57:44 +0000 (0:00:03.867) 0:00:50.293 *** 2025-09-03 00:59:32.469108 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:59:32.469119 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:59:32.469130 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:59:32.469141 | orchestrator | 2025-09-03 00:59:32.469152 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-09-03 00:59:32.469163 | orchestrator | Wednesday 03 September 2025 00:57:48 +0000 (0:00:03.654) 0:00:53.947 *** 2025-09-03 00:59:32.469180 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-03 00:59:32.469205 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-03 00:59:32.469218 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-03 00:59:32.469238 | orchestrator | 2025-09-03 00:59:32.469249 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2025-09-03 00:59:32.469260 | orchestrator | Wednesday 03 September 2025 00:57:52 +0000 (0:00:04.502) 0:00:58.449 *** 2025-09-03 00:59:32.469271 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:59:32.469283 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:59:32.469294 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:59:32.469305 | orchestrator | 2025-09-03 00:59:32.469316 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2025-09-03 00:59:32.469327 | orchestrator | Wednesday 03 September 2025 00:58:01 +0000 (0:00:08.591) 0:01:07.041 *** 2025-09-03 00:59:32.469338 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:59:32.469349 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:59:32.469361 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:59:32.469372 | orchestrator | 2025-09-03 00:59:32.469383 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2025-09-03 00:59:32.469399 | orchestrator | Wednesday 03 September 2025 00:58:06 +0000 (0:00:05.297) 0:01:12.338 *** 2025-09-03 00:59:32.469411 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:59:32.469422 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:59:32.469433 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:59:32.469444 | orchestrator | 2025-09-03 00:59:32.469455 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2025-09-03 00:59:32.469466 | orchestrator | Wednesday 03 September 2025 00:58:11 +0000 (0:00:04.759) 0:01:17.098 *** 2025-09-03 00:59:32.469477 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:59:32.469488 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:59:32.469499 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:59:32.469510 | orchestrator | 2025-09-03 00:59:32.469521 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2025-09-03 00:59:32.469541 | orchestrator | Wednesday 03 September 2025 00:58:14 +0000 (0:00:03.375) 0:01:20.473 *** 2025-09-03 00:59:32.469552 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:59:32.469563 | 
orchestrator | skipping: [testbed-node-2] 2025-09-03 00:59:32.469574 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:59:32.469585 | orchestrator | 2025-09-03 00:59:32.469595 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2025-09-03 00:59:32.469606 | orchestrator | Wednesday 03 September 2025 00:58:17 +0000 (0:00:02.899) 0:01:23.372 *** 2025-09-03 00:59:32.469617 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:59:32.469628 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:59:32.469639 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:59:32.469650 | orchestrator | 2025-09-03 00:59:32.469661 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2025-09-03 00:59:32.469672 | orchestrator | Wednesday 03 September 2025 00:58:17 +0000 (0:00:00.253) 0:01:23.625 *** 2025-09-03 00:59:32.469683 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-09-03 00:59:32.469694 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:59:32.469705 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-09-03 00:59:32.469716 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:59:32.469727 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-09-03 00:59:32.469744 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:59:32.469755 | orchestrator | 2025-09-03 00:59:32.469766 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-09-03 00:59:32.469777 | orchestrator | Wednesday 03 September 2025 00:58:20 +0000 (0:00:02.955) 0:01:26.581 *** 2025-09-03 00:59:32.469789 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 
2025-09-03 00:59:32.469815 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-03 00:59:32.469829 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-03 00:59:32.469847 | orchestrator | 2025-09-03 00:59:32.469858 | 
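Each glance-api container defined above carries a healthcheck of the form ['CMD-SHELL', 'healthcheck_curl http://<node_ip>:9292'] with interval 30, retries 3, start_period 5 and timeout 30. The sketch below shows an equivalent HTTP liveness probe in Python; it is illustrative only, the probe_glance_api helper is made up for this note and is not Kolla's healthcheck_curl script.

    # Hedged sketch: an HTTP probe in the spirit of the container healthcheck
    # above. The interval/retries/timeout defaults mirror the values in the log;
    # the helper name and structure are assumptions for illustration.
    import time
    import urllib.error
    import urllib.request

    def probe_glance_api(url: str, retries: int = 3, interval: float = 30.0,
                         timeout: float = 30.0) -> bool:
        """Return True once the Glance API endpoint answers an HTTP request."""
        for attempt in range(1, retries + 1):
            try:
                with urllib.request.urlopen(url, timeout=timeout) as resp:
                    # Any answer below 500 means the service is up and listening.
                    return resp.status < 500
            except urllib.error.HTTPError as exc:
                # Glance usually answers "/" with a version document (e.g. 300);
                # a 3xx/4xx response still proves the API is reachable.
                return exc.code < 500
            except (urllib.error.URLError, OSError):
                if attempt == retries:
                    return False
                time.sleep(interval)
        return False

    # Example: probe_glance_api("http://192.168.16.10:9292")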
orchestrator | TASK [glance : include_tasks] **************************************************
2025-09-03 00:59:32.469869 | orchestrator | Wednesday 03 September 2025 00:58:24 +0000 (0:00:03.454) 0:01:30.035 ***
2025-09-03 00:59:32.469880 | orchestrator | skipping: [testbed-node-0]
2025-09-03 00:59:32.469891 | orchestrator | skipping: [testbed-node-1]
2025-09-03 00:59:32.469902 | orchestrator | skipping: [testbed-node-2]
2025-09-03 00:59:32.469913 | orchestrator |
2025-09-03 00:59:32.469924 | orchestrator | TASK [glance : Creating Glance database] ***************************************
2025-09-03 00:59:32.469950 | orchestrator | Wednesday 03 September 2025 00:58:24 +0000 (0:00:00.259) 0:01:30.294 ***
2025-09-03 00:59:32.469961 | orchestrator | changed: [testbed-node-0]
2025-09-03 00:59:32.469972 | orchestrator |
2025-09-03 00:59:32.469983 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] **********
2025-09-03 00:59:32.469994 | orchestrator | Wednesday 03 September 2025 00:58:26 +0000 (0:00:01.814) 0:01:32.108 ***
2025-09-03 00:59:32.470005 | orchestrator | changed: [testbed-node-0]
2025-09-03 00:59:32.470090 | orchestrator |
2025-09-03 00:59:32.470106 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] ****************
2025-09-03 00:59:32.470117 | orchestrator | Wednesday 03 September 2025 00:58:28 +0000 (0:00:01.719) 0:01:33.827 ***
2025-09-03 00:59:32.470128 | orchestrator | changed: [testbed-node-0]
2025-09-03 00:59:32.470139 | orchestrator |
2025-09-03 00:59:32.470150 | orchestrator | TASK [glance : Running Glance bootstrap container] *****************************
2025-09-03 00:59:32.470161 | orchestrator | Wednesday 03 September 2025 00:58:29 +0000 (0:00:01.756) 0:01:35.584 ***
2025-09-03 00:59:32.470173 | orchestrator | changed: [testbed-node-0]
2025-09-03 00:59:32.470184 | orchestrator |
2025-09-03 00:59:32.470194 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] ***************
2025-09-03 00:59:32.470205 | orchestrator | Wednesday 03 September 2025 00:58:55 +0000 (0:00:25.403) 0:02:00.988 ***
2025-09-03 00:59:32.470216 | orchestrator | changed: [testbed-node-0]
2025-09-03 00:59:32.470228 | orchestrator |
2025-09-03 00:59:32.470246 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-09-03 00:59:32.470258 | orchestrator | Wednesday 03 September 2025 00:58:57 +0000 (0:00:00.060) 0:02:02.939 ***
2025-09-03 00:59:32.470268 | orchestrator |
2025-09-03 00:59:32.470279 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-09-03 00:59:32.470290 | orchestrator | Wednesday 03 September 2025 00:58:57 +0000 (0:00:00.065) 0:02:03.000 ***
2025-09-03 00:59:32.470301 | orchestrator |
2025-09-03 00:59:32.470312 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-09-03 00:59:32.470331 | orchestrator | Wednesday 03 September 2025 00:58:57 +0000 (0:00:00.064) 0:02:03.066 ***
2025-09-03 00:59:32.470342 | orchestrator |
2025-09-03 00:59:32.470352 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************
2025-09-03 00:59:32.470369 | orchestrator | Wednesday 03 September 2025 00:58:57 +0000 (0:00:00.064) 0:02:03.130 ***
2025-09-03 00:59:32.470380 | orchestrator | changed: [testbed-node-0]
2025-09-03 00:59:32.470391 | orchestrator | changed: [testbed-node-1]
2025-09-03 00:59:32.470402 | orchestrator | changed: [testbed-node-2]
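The "Enable/Disable log_bin_trust_function_creators function" tasks above toggle a MariaDB global so that the Glance bootstrap (schema migration) can create triggers and stored functions while binary logging is enabled. Below is a minimal sketch of the equivalent SQL driven from Python; the PyMySQL client and the host, user and password values are placeholders, and kolla-ansible actually performs this step with its own database modules.

    # Hedged sketch of the log_bin_trust_function_creators toggle around the
    # Glance bootstrap. Connection details are placeholders, not taken from
    # this deployment.
    import pymysql  # assumption: PyMySQL is available

    def set_log_bin_trust_function_creators(enabled: bool) -> None:
        conn = pymysql.connect(host="db.example.invalid", user="root",
                               password="secret")  # placeholder credentials
        try:
            with conn.cursor() as cur:
                cur.execute("SET GLOBAL log_bin_trust_function_creators = %s",
                            (1 if enabled else 0,))
            conn.commit()
        finally:
            conn.close()

    # Enable before the bootstrap container runs, disable again afterwards:
    # set_log_bin_trust_function_creators(True)
    # ... glance-manage db sync inside the bootstrap container ...
    # set_log_bin_trust_function_creators(False)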
2025-09-03 00:59:32.470413 | orchestrator |
2025-09-03 00:59:32.470424 | orchestrator | PLAY RECAP *********************************************************************
2025-09-03 00:59:32.470436 | orchestrator | testbed-node-0 : ok=26  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-09-03 00:59:32.470449 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-09-03 00:59:32.470460 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-09-03 00:59:32.470471 | orchestrator |
2025-09-03 00:59:32.470481 | orchestrator |
2025-09-03 00:59:32.470492 | orchestrator | TASKS RECAP ********************************************************************
2025-09-03 00:59:32.470503 | orchestrator | Wednesday 03 September 2025 00:59:30 +0000 (0:00:33.302) 0:02:36.433 ***
2025-09-03 00:59:32.470514 | orchestrator | ===============================================================================
2025-09-03 00:59:32.470525 | orchestrator | glance : Restart glance-api container ---------------------------------- 33.30s
2025-09-03 00:59:32.470535 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 25.40s
2025-09-03 00:59:32.470546 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 8.59s
2025-09-03 00:59:32.470557 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 5.90s
2025-09-03 00:59:32.470568 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 5.30s
2025-09-03 00:59:32.470579 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 4.76s
2025-09-03 00:59:32.470590 | orchestrator | glance : Copying over config.json files for services -------------------- 4.50s
2025-09-03 00:59:32.470600 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 4.28s
2025-09-03 00:59:32.470611 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 4.16s
2025-09-03 00:59:32.470622 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 4.10s
2025-09-03 00:59:32.470633 | orchestrator | glance : Ensuring config directories exist ------------------------------ 4.10s
2025-09-03 00:59:32.470644 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.00s
2025-09-03 00:59:32.470654 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.97s
2025-09-03 00:59:32.470665 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.87s
2025-09-03 00:59:32.470676 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 3.65s
2025-09-03 00:59:32.470687 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.59s
2025-09-03 00:59:32.470697 | orchestrator | glance : Check glance containers ---------------------------------------- 3.45s
2025-09-03 00:59:32.470708 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 3.38s
2025-09-03 00:59:32.470719 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 2.96s
2025-09-03 00:59:32.470730 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 2.90s
2025-09-03 00:59:32.470741 | orchestrator | 2025-09-03 00:59:32 | INFO  | Wait 1 second(s) until the next check
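The repeated "Task <uuid> is in state STARTED ... Wait 1 second(s) until the next check" lines around this play come from a simple state-polling loop on the manager. A minimal sketch of such a loop, assuming a hypothetical get_task_state(task_id) callable that returns strings such as STARTED or SUCCESS (the real OSISM client may differ):

    # Hedged sketch of the polling behaviour visible in this log: query the
    # state of each pending task, log it, and pause before the next check.
    # get_task_state() is a stand-in for whatever the real client uses.
    import logging
    import time

    logging.basicConfig(format="%(asctime)s | %(levelname)s | %(message)s",
                        level=logging.INFO)
    log = logging.getLogger(__name__)

    def wait_for_tasks(task_ids, get_task_state, interval=1.0):
        """Poll until every task reports a terminal state."""
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_task_state(task_id)
                log.info("Task %s is in state %s", task_id, state)
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                log.info("Wait %d second(s) until the next check", int(interval))
                time.sleep(interval)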
2025-09-03 00:59:35.510140 | orchestrator | 2025-09-03 00:59:35 | INFO  | Task f53a3ec7-70f2-490b-90d0-e98642e9d075 is in state STARTED
2025-09-03 00:59:35.511130 | orchestrator | 2025-09-03 00:59:35 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED
2025-09-03 00:59:35.512154 | orchestrator | 2025-09-03 00:59:35 | INFO  | Task a691a2f3-7f2a-460d-8b50-ec5d5d34d854 is in state STARTED
2025-09-03 00:59:35.514461 | orchestrator | 2025-09-03 00:59:35 | INFO  | Task 2ce95322-778b-4dfc-ac97-427d82cda0df is in state STARTED
2025-09-03 00:59:35.514486 | orchestrator | 2025-09-03 00:59:35 | INFO  | Wait 1 second(s) until the next check
2025-09-03 00:59:50.749710 | orchestrator | 2025-09-03 00:59:50 | INFO  | Task f53a3ec7-70f2-490b-90d0-e98642e9d075 is in state STARTED
2025-09-03 00:59:50.751635 | orchestrator |
2025-09-03 00:59:50 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 00:59:50.753137 | orchestrator | 2025-09-03 00:59:50 | INFO  | Task a691a2f3-7f2a-460d-8b50-ec5d5d34d854 is in state STARTED 2025-09-03 00:59:50.755072 | orchestrator | 2025-09-03 00:59:50 | INFO  | Task 2ce95322-778b-4dfc-ac97-427d82cda0df is in state STARTED 2025-09-03 00:59:50.755375 | orchestrator | 2025-09-03 00:59:50 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:59:53.797307 | orchestrator | 2025-09-03 00:59:53 | INFO  | Task f53a3ec7-70f2-490b-90d0-e98642e9d075 is in state STARTED 2025-09-03 00:59:53.798818 | orchestrator | 2025-09-03 00:59:53 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 00:59:53.800124 | orchestrator | 2025-09-03 00:59:53 | INFO  | Task a691a2f3-7f2a-460d-8b50-ec5d5d34d854 is in state STARTED 2025-09-03 00:59:53.801702 | orchestrator | 2025-09-03 00:59:53 | INFO  | Task 2ce95322-778b-4dfc-ac97-427d82cda0df is in state STARTED 2025-09-03 00:59:53.801787 | orchestrator | 2025-09-03 00:59:53 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:59:56.847516 | orchestrator | 2025-09-03 00:59:56 | INFO  | Task f53a3ec7-70f2-490b-90d0-e98642e9d075 is in state STARTED 2025-09-03 00:59:56.849089 | orchestrator | 2025-09-03 00:59:56 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 00:59:56.850823 | orchestrator | 2025-09-03 00:59:56 | INFO  | Task a691a2f3-7f2a-460d-8b50-ec5d5d34d854 is in state STARTED 2025-09-03 00:59:56.852324 | orchestrator | 2025-09-03 00:59:56 | INFO  | Task 2ce95322-778b-4dfc-ac97-427d82cda0df is in state STARTED 2025-09-03 00:59:56.853016 | orchestrator | 2025-09-03 00:59:56 | INFO  | Wait 1 second(s) until the next check 2025-09-03 00:59:59.902528 | orchestrator | 2025-09-03 00:59:59 | INFO  | Task f53a3ec7-70f2-490b-90d0-e98642e9d075 is in state STARTED 2025-09-03 00:59:59.903752 | orchestrator | 2025-09-03 00:59:59 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 00:59:59.908418 | orchestrator | 2025-09-03 00:59:59 | INFO  | Task a691a2f3-7f2a-460d-8b50-ec5d5d34d854 is in state SUCCESS 2025-09-03 00:59:59.910261 | orchestrator | 2025-09-03 00:59:59.910299 | orchestrator | 2025-09-03 00:59:59.910312 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-03 00:59:59.910324 | orchestrator | 2025-09-03 00:59:59.910335 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-03 00:59:59.910347 | orchestrator | Wednesday 03 September 2025 00:56:47 +0000 (0:00:00.329) 0:00:00.329 *** 2025-09-03 00:59:59.910358 | orchestrator | ok: [testbed-manager] 2025-09-03 00:59:59.910375 | orchestrator | ok: [testbed-node-0] 2025-09-03 00:59:59.910446 | orchestrator | ok: [testbed-node-1] 2025-09-03 00:59:59.910458 | orchestrator | ok: [testbed-node-2] 2025-09-03 00:59:59.910469 | orchestrator | ok: [testbed-node-3] 2025-09-03 00:59:59.910480 | orchestrator | ok: [testbed-node-4] 2025-09-03 00:59:59.910492 | orchestrator | ok: [testbed-node-5] 2025-09-03 00:59:59.910554 | orchestrator | 2025-09-03 00:59:59.910586 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-03 00:59:59.910598 | orchestrator | Wednesday 03 September 2025 00:56:48 +0000 (0:00:00.852) 0:00:01.181 *** 2025-09-03 00:59:59.910610 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2025-09-03 
00:59:59.910621 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2025-09-03 00:59:59.910632 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2025-09-03 00:59:59.910642 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2025-09-03 00:59:59.910653 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2025-09-03 00:59:59.910664 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2025-09-03 00:59:59.910675 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2025-09-03 00:59:59.910685 | orchestrator | 2025-09-03 00:59:59.910696 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2025-09-03 00:59:59.910707 | orchestrator | 2025-09-03 00:59:59.910718 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-09-03 00:59:59.910777 | orchestrator | Wednesday 03 September 2025 00:56:49 +0000 (0:00:00.690) 0:00:01.871 *** 2025-09-03 00:59:59.910792 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-03 00:59:59.910805 | orchestrator | 2025-09-03 00:59:59.910844 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2025-09-03 00:59:59.910856 | orchestrator | Wednesday 03 September 2025 00:56:50 +0000 (0:00:01.402) 0:00:03.274 *** 2025-09-03 00:59:59.910870 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-03 00:59:59.910886 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-03 00:59:59.910901 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-03 00:59:59.910915 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-03 00:59:59.910965 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:59:59.910986 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-03 00:59:59.911000 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:59:59.911023 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-03 00:59:59.911036 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-03 00:59:59.911050 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-03 00:59:59.911064 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-03 00:59:59.911077 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:59:59.911101 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:59:59.911120 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:59:59.911140 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-03 00:59:59.911155 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 
'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-03 00:59:59.911174 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-03 00:59:59.911187 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:59:59.911202 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-03 00:59:59.911223 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-03 00:59:59.911243 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-03 00:59:59.911261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-03 00:59:59.911273 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:59:59.911284 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-03 00:59:59.911296 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:59:59.911307 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-03 00:59:59.911318 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:59:59.911336 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-03 00:59:59.911358 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': 
{'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:59:59.911370 | orchestrator | 2025-09-03 00:59:59.911382 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-09-03 00:59:59.911393 | orchestrator | Wednesday 03 September 2025 00:56:54 +0000 (0:00:03.482) 0:00:06.757 *** 2025-09-03 00:59:59.911405 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-03 00:59:59.911416 | orchestrator | 2025-09-03 00:59:59.911427 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-09-03 00:59:59.911438 | orchestrator | Wednesday 03 September 2025 00:56:55 +0000 (0:00:01.397) 0:00:08.154 *** 2025-09-03 00:59:59.911450 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-03 00:59:59.911462 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-03 00:59:59.911473 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-03 00:59:59.911693 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-03 00:59:59.911732 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-03 00:59:59.911772 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-03 00:59:59.911792 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-03 00:59:59.911812 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-03 00:59:59.911831 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:59:59.911843 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:59:59.911855 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:59:59.911866 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-03 00:59:59.911886 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-03 00:59:59.911918 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-03 00:59:59.911964 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-03 00:59:59.911977 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 
'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-03 00:59:59.911990 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:59:59.912002 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:59:59.912014 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:59:59.912041 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-03 00:59:59.912059 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-03 00:59:59.912071 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-03 00:59:59.912082 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:59:59.912094 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-03 00:59:59.912105 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-03 00:59:59.912117 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-03 00:59:59.912128 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:59:59.913784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:59:59.913832 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:59:59.913846 | orchestrator | 2025-09-03 00:59:59.913857 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-09-03 00:59:59.913869 | orchestrator | Wednesday 03 September 2025 00:57:01 +0000 (0:00:06.060) 0:00:14.215 *** 2025-09-03 00:59:59.913881 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-03 00:59:59.913893 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-03 00:59:59.913905 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-03 00:59:59.913917 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-03 00:59:59.913978 | orchestrator | skipping: [testbed-manager] => 
(item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-03 00:59:59.913997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-03 00:59:59.914009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-03 00:59:59.914072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-03 00:59:59.914094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-03 00:59:59.914114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-03 00:59:59.914133 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:59:59.914146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-03 00:59:59.914167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-03 00:59:59.914187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-03 00:59:59.914205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-03 00:59:59.914217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-03 00:59:59.914228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-03 00:59:59.914240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-03 00:59:59.914251 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-03 00:59:59.914269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-03 00:59:59.914281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-03 00:59:59.914292 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:59:59.914304 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:59:59.914315 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:59:59.914334 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-03 00:59:59.914351 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-03 00:59:59.914363 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-03 00:59:59.914375 | 
orchestrator | skipping: [testbed-node-3] 2025-09-03 00:59:59.914386 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-03 00:59:59.914398 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-03 00:59:59.914416 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-03 00:59:59.914427 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:59:59.914439 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-03 00:59:59.914452 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-03 00:59:59.914481 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-03 00:59:59.914500 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:59:59.914518 | orchestrator | 
2025-09-03 00:59:59.914544 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2025-09-03 00:59:59.914563 | orchestrator | Wednesday 03 September 2025 00:57:02 +0000 (0:00:01.173) 0:00:15.389 *** 2025-09-03 00:59:59.914580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-03 00:59:59.914592 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-03 00:59:59.914604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-03 00:59:59.914627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-03 00:59:59.914639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-03 00:59:59.914650 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:59:59.914661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-03 00:59:59.914679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-03 00:59:59.914696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-03 00:59:59.914708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-03 00:59:59.914720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-03 00:59:59.914731 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-03 00:59:59.914754 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-03 00:59:59.914766 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-03 00:59:59.914783 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-03 00:59:59.914802 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-03 00:59:59.914814 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:59:59.914825 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:59:59.914837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-03 00:59:59.914854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-03 00:59:59.914866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-03 00:59:59.914878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-03 00:59:59.914889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-03 00:59:59.914901 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:59:59.914917 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-03 00:59:59.914954 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-03 00:59:59.914967 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-03 00:59:59.914978 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:59:59.914990 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-03 00:59:59.915008 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-03 00:59:59.915019 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-03 00:59:59.915030 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:59:59.915042 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-03 00:59:59.915053 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-03 00:59:59.915071 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-03 00:59:59.915083 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:59:59.915094 | orchestrator | 2025-09-03 00:59:59.915106 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2025-09-03 00:59:59.915117 | orchestrator | Wednesday 03 September 2025 00:57:04 +0000 (0:00:01.563) 0:00:16.952 *** 2025-09-03 00:59:59.915134 | 
orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-03 00:59:59.915152 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-03 00:59:59.915163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-03 00:59:59.915178 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-03 00:59:59.915197 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-03 00:59:59.915215 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-03 00:59:59.915241 | orchestrator | changed: 
[testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-03 00:59:59.915268 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-03 00:59:59.915287 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:59:59.915318 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:59:59.915337 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:59:59.915349 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-03 00:59:59.915361 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-03 00:59:59.915373 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-03 00:59:59.915391 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-03 00:59:59.915408 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:59:59.915427 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:59:59.915438 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:59:59.915450 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-03 00:59:59.915461 | orchestrator | changed: 
[testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-03 00:59:59.915474 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-03 00:59:59.915491 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-03 00:59:59.915508 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-03 00:59:59.915527 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-03 00:59:59.915539 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:59:59.915550 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-03 00:59:59.915562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:59:59.915573 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:59:59.915584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:59:59.915596 | orchestrator | 2025-09-03 00:59:59.915607 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2025-09-03 00:59:59.915618 | orchestrator | Wednesday 03 September 2025 00:57:09 +0000 (0:00:05.098) 0:00:22.050 *** 2025-09-03 00:59:59.915630 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-03 00:59:59.915641 | orchestrator | 2025-09-03 00:59:59.915653 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2025-09-03 00:59:59.915676 | orchestrator | Wednesday 03 September 2025 00:57:10 +0000 (0:00:00.871) 0:00:22.922 *** 2025-09-03 00:59:59.915694 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1053808, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8237617, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.915753 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1053808, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8237617, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.915768 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1053808, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8237617, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.915789 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1053842, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8274088, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.915809 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1053842, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8274088, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.915829 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1053808, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8237617, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.915853 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 
1053842, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8274088, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.915883 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1053808, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8237617, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-03 00:59:59.915895 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1053808, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8237617, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.915907 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1053842, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8274088, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.915918 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1053808, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8237617, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.915987 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1053795, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8231227, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.916000 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1053795, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8231227, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.916018 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1053795, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8231227, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.916043 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1053795, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8231227, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.916055 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1053828, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8257744, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.916067 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1053828, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8257744, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.916078 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1053842, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8274088, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.916088 | 
orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1053842, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8274088, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.916098 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1053828, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8257744, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.916119 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1053790, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8210785, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.916134 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1053795, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8231227, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.916144 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1053828, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8257744, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.916154 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1053790, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8210785, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.916165 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1053811, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8239663, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.916175 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1053795, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8231227, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.916185 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1053790, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8210785, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.916201 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1053828, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8257744, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.916344 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1053842, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8274088, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-03 00:59:59.916359 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1053811, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 
1756858497.8239663, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.916369 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1053790, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8210785, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.916379 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1053811, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8239663, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.916389 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1053828, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8257744, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.916399 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1053825, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8257744, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.916417 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1053825, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8257744, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.916458 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 
'gid': 0, 'size': 3900, 'inode': 1053790, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8210785, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.916470 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1053790, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8210785, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.916481 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1053825, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8257744, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.916491 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1053811, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8239663, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.916501 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1053811, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8239663, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.916511 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1053814, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8244095, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.916528 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1053814, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8244095, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.916564 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1053825, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8257744, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.916580 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1053811, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8239663, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.916591 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1053806, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8233888, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.916602 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1053814, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8244095, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.916612 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1053814, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8244095, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 
00:59:59.916628 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1053825, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8257744, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.916638 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1053839, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8270745, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.916675 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1053825, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8257744, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.916692 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1053806, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8233888, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.916702 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1053814, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8244095, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.916712 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1053806, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8233888, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.916722 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1053839, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8270745, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.916738 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1053806, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8233888, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.916749 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1053806, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8233888, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.916784 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1053782, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.819857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.916803 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1053795, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8231227, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-03 00:59:59.916813 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1053814, 'dev': 123, 'nlink': 1, 'atime': 
1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8244095, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.916824 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1053782, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.819857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.916834 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1053861, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8292327, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.916852 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1053839, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8270745, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.916862 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1053839, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8270745, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.916897 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1053839, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8270745, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.916913 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1053782, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.819857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.916924 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1053806, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8233888, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.916958 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1053782, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.819857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.916976 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1053833, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8268564, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.916988 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1053782, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.819857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.917000 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1053861, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8292327, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.917042 | orchestrator | skipping: 
[testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1053861, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8292327, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.917059 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1053861, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8292327, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.917071 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1053833, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8268564, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.917082 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1053839, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8270745, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.917100 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1053833, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8268564, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.917111 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1053833, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8268564, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.917122 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1053794, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8215036, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.917141 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1053861, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8292327, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.917158 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1053794, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8215036, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.917170 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1053794, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8215036, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.917182 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1053794, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8215036, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.917201 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1053784, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 
1756858497.820046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.917212 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1053784, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.820046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.917224 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1053828, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8257744, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-03 00:59:59.917242 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1053833, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8268564, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.917259 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1053784, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.820046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.917271 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1053782, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.819857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.917288 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1053784, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.820046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.917300 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1053821, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8251705, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.917310 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1053821, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8251705, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.917320 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1053861, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8292327, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.917337 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1053821, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8251705, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.917352 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1053794, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8215036, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.917362 | orchestrator | skipping: [testbed-node-0] => (item={'path': 
'/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1053818, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8248827, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.917378 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1053818, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8248827, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.917388 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1053859, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8292327, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.917398 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:59:59.917409 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1053821, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8251705, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.917419 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1053790, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8210785, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-03 00:59:59.917434 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1053833, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8268564, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.917448 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1053794, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8215036, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.917459 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1053784, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.820046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.917474 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1053818, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8248827, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.917484 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1053859, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8292327, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.917494 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:59:59.917504 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1053818, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8248827, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.917514 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1053859, 'dev': 
123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8292327, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.917524 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:59:59.917538 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1053784, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.820046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.917553 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1053859, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8292327, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.917572 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:59:59.917582 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1053821, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8251705, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.917592 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1053821, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8251705, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.917602 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1053818, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8248827, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.917612 | orchestrator | 
skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1053818, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8248827, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.917622 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1053859, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8292327, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.917632 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:59:59.917646 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1053859, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8292327, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-03 00:59:59.917657 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:59:59.917671 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1053811, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8239663, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-03 00:59:59.917687 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1053825, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8257744, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-03 00:59:59.917697 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1053814, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 
1756858497.8244095, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-03 00:59:59.917708 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1053806, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8233888, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-03 00:59:59.917718 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1053839, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8270745, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-03 00:59:59.917728 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1053782, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.819857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-03 00:59:59.917742 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1053861, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8292327, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-03 00:59:59.917763 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1053833, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8268564, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-03 00:59:59.917773 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1053794, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8215036, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-03 00:59:59.917783 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1053784, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.820046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-03 00:59:59.917793 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1053821, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8251705, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-03 00:59:59.917803 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1053818, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8248827, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-03 00:59:59.917813 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1053859, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8292327, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-03 00:59:59.917823 | orchestrator | 2025-09-03 00:59:59.917833 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2025-09-03 00:59:59.917843 | orchestrator | Wednesday 03 September 2025 00:57:35 +0000 (0:00:24.607) 0:00:47.529 *** 2025-09-03 00:59:59.917853 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-03 00:59:59.917863 | orchestrator | 2025-09-03 00:59:59.917877 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2025-09-03 00:59:59.917893 | orchestrator | Wednesday 03 September 2025 00:57:35 +0000 (0:00:00.627) 0:00:48.157 *** 2025-09-03 00:59:59.917903 | orchestrator | [WARNING]: Skipped 2025-09-03 00:59:59.917913 | 
orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-03 00:59:59.917923 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2025-09-03 00:59:59.917947 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-03 00:59:59.917957 | orchestrator | manager/prometheus.yml.d' is not a directory 2025-09-03 00:59:59.917967 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-03 00:59:59.917982 | orchestrator | [WARNING]: Skipped 2025-09-03 00:59:59.917992 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-03 00:59:59.918002 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2025-09-03 00:59:59.918011 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-03 00:59:59.918050 | orchestrator | node-0/prometheus.yml.d' is not a directory 2025-09-03 00:59:59.918061 | orchestrator | [WARNING]: Skipped 2025-09-03 00:59:59.918071 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-03 00:59:59.918080 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2025-09-03 00:59:59.918090 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-03 00:59:59.918100 | orchestrator | node-1/prometheus.yml.d' is not a directory 2025-09-03 00:59:59.918109 | orchestrator | [WARNING]: Skipped 2025-09-03 00:59:59.918119 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-03 00:59:59.918129 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2025-09-03 00:59:59.918139 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-03 00:59:59.918148 | orchestrator | node-3/prometheus.yml.d' is not a directory 2025-09-03 00:59:59.918158 | orchestrator | [WARNING]: Skipped 2025-09-03 00:59:59.918168 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-03 00:59:59.918177 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2025-09-03 00:59:59.918187 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-03 00:59:59.918196 | orchestrator | node-2/prometheus.yml.d' is not a directory 2025-09-03 00:59:59.918206 | orchestrator | [WARNING]: Skipped 2025-09-03 00:59:59.918216 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-03 00:59:59.918225 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2025-09-03 00:59:59.918235 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-03 00:59:59.918245 | orchestrator | node-5/prometheus.yml.d' is not a directory 2025-09-03 00:59:59.918254 | orchestrator | [WARNING]: Skipped 2025-09-03 00:59:59.918264 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-03 00:59:59.918274 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2025-09-03 00:59:59.918283 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-03 00:59:59.918293 | orchestrator | node-4/prometheus.yml.d' is not a directory 2025-09-03 00:59:59.918303 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-03 00:59:59.918313 | orchestrator | ok: [testbed-node-1 -> localhost] 
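Note on the [WARNING] lines above: they come from the file lookup behind "Find prometheus host config overrides". When a per-host overlay directory such as .../overlays/prometheus/testbed-node-0/prometheus.yml.d does not exist as a directory, the lookup skips that path and returns an empty file list, so the task still reports "ok" and the warnings are harmless. Below is a minimal Ansible sketch of such an optional per-host override lookup, assuming a kolla-style /opt/configuration overlay layout; it is illustrative only and not the actual kolla-ansible task.

  - hosts: all
    gather_facts: false
    tasks:
      # Sketch only: look for optional per-host Prometheus override snippets.
      # Hosts without a prometheus.yml.d directory trigger the harmless
      # "Skipped ... is not a directory" warning seen above and simply
      # contribute no override files.
      - name: Find prometheus host config overrides (sketch)
        ansible.builtin.find:
          paths: "/opt/configuration/environments/kolla/files/overlays/prometheus/{{ inventory_hostname }}/prometheus.yml.d"
          patterns: "*.yml"
        delegate_to: localhost
        register: prometheus_host_overrides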
2025-09-03 00:59:59.918323 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-03 00:59:59.918333 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-09-03 00:59:59.918343 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-09-03 00:59:59.918352 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-09-03 00:59:59.918362 | orchestrator | 2025-09-03 00:59:59.918372 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2025-09-03 00:59:59.918388 | orchestrator | Wednesday 03 September 2025 00:57:37 +0000 (0:00:02.036) 0:00:50.194 *** 2025-09-03 00:59:59.918398 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-03 00:59:59.918408 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-03 00:59:59.918418 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:59:59.918428 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:59:59.918437 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-03 00:59:59.918447 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:59:59.918457 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-03 00:59:59.918466 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-03 00:59:59.918476 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:59:59.918486 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:59:59.918496 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-03 00:59:59.918505 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:59:59.918515 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2025-09-03 00:59:59.918525 | orchestrator | 2025-09-03 00:59:59.918534 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2025-09-03 00:59:59.918544 | orchestrator | Wednesday 03 September 2025 00:57:54 +0000 (0:00:17.029) 0:01:07.223 *** 2025-09-03 00:59:59.918560 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-03 00:59:59.918570 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:59:59.918580 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-03 00:59:59.918589 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:59:59.918599 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-03 00:59:59.918609 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:59:59.918618 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-03 00:59:59.918628 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:59:59.918643 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-03 00:59:59.918653 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:59:59.918663 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-03 00:59:59.918672 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:59:59.918682 
| orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2025-09-03 00:59:59.918691 | orchestrator | 2025-09-03 00:59:59.918701 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2025-09-03 00:59:59.918711 | orchestrator | Wednesday 03 September 2025 00:57:59 +0000 (0:00:05.067) 0:01:12.290 *** 2025-09-03 00:59:59.918721 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-03 00:59:59.918731 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:59:59.918741 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-03 00:59:59.918751 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-03 00:59:59.918761 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:59:59.918770 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:59:59.918780 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-03 00:59:59.918796 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:59:59.918807 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2025-09-03 00:59:59.918816 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-03 00:59:59.918826 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:59:59.918836 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-03 00:59:59.918846 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:59:59.918855 | orchestrator | 2025-09-03 00:59:59.918865 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2025-09-03 00:59:59.918875 | orchestrator | Wednesday 03 September 2025 00:58:02 +0000 (0:00:02.703) 0:01:14.993 *** 2025-09-03 00:59:59.918885 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-03 00:59:59.918895 | orchestrator | 2025-09-03 00:59:59.918905 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2025-09-03 00:59:59.918914 | orchestrator | Wednesday 03 September 2025 00:58:03 +0000 (0:00:01.219) 0:01:16.212 *** 2025-09-03 00:59:59.918966 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:59:59.918979 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:59:59.918989 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:59:59.918999 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:59:59.919008 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:59:59.919018 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:59:59.919027 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:59:59.919037 | orchestrator | 2025-09-03 00:59:59.919046 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2025-09-03 00:59:59.919055 | orchestrator | Wednesday 03 September 2025 00:58:04 +0000 (0:00:00.691) 0:01:16.904 *** 2025-09-03 00:59:59.919063 | orchestrator | skipping: [testbed-manager] 
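The copy tasks above all follow the same pattern: a config file is only rendered on hosts that actually run the corresponding service (here only testbed-manager carries the Prometheus server, so the testbed-node-* hosts are skipped), and operator-supplied overlays such as .../overlays/prometheus/prometheus-alertmanager.yml are used in place of the role's default template. A minimal sketch of that guarded template copy follows; the destination path, file mode, and the host check are assumptions for illustration, not the exact kolla-ansible implementation.

  - hosts: all
    gather_facts: false
    vars:
      # Assumed stand-in for the role's real "service mapped to this host"
      # check that produces the changed/skipping pattern above.
      prometheus_server_hosts: ["testbed-manager"]
    tasks:
      - name: Copying over prometheus config file (sketch)
        ansible.builtin.template:
          src: prometheus.yml.j2                          # role template, supplied next to the playbook
          dest: /etc/kolla/prometheus-server/prometheus.yml   # assumed kolla-style destination
          mode: "0660"                                     # assumed mode
        when: inventory_hostname in prometheus_server_hosts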
2025-09-03 00:59:59.919071 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:59:59.919079 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:59:59.919086 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:59:59.919094 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:59:59.919102 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:59:59.919110 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:59:59.919118 | orchestrator | 2025-09-03 00:59:59.919125 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2025-09-03 00:59:59.919133 | orchestrator | Wednesday 03 September 2025 00:58:06 +0000 (0:00:02.481) 0:01:19.385 *** 2025-09-03 00:59:59.919141 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-03 00:59:59.919149 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-03 00:59:59.919157 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:59:59.919165 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:59:59.919173 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-03 00:59:59.919181 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:59:59.919188 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-03 00:59:59.919197 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:59:59.919209 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-03 00:59:59.919217 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:59:59.919225 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-03 00:59:59.919233 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:59:59.919241 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-03 00:59:59.919256 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:59:59.919264 | orchestrator | 2025-09-03 00:59:59.919272 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2025-09-03 00:59:59.919280 | orchestrator | Wednesday 03 September 2025 00:58:09 +0000 (0:00:02.048) 0:01:21.434 *** 2025-09-03 00:59:59.919291 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-03 00:59:59.919300 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:59:59.919308 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-03 00:59:59.919316 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:59:59.919324 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-03 00:59:59.919332 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:59:59.919339 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-03 00:59:59.919347 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:59:59.919355 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-03 00:59:59.919363 | orchestrator | skipping: [testbed-node-4] 2025-09-03 
00:59:59.919371 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-03 00:59:59.919379 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:59:59.919386 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2025-09-03 00:59:59.919394 | orchestrator | 2025-09-03 00:59:59.919402 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2025-09-03 00:59:59.919410 | orchestrator | Wednesday 03 September 2025 00:58:10 +0000 (0:00:01.963) 0:01:23.398 *** 2025-09-03 00:59:59.919418 | orchestrator | [WARNING]: Skipped 2025-09-03 00:59:59.919426 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2025-09-03 00:59:59.919434 | orchestrator | due to this access issue: 2025-09-03 00:59:59.919442 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2025-09-03 00:59:59.919450 | orchestrator | not a directory 2025-09-03 00:59:59.919458 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-03 00:59:59.919465 | orchestrator | 2025-09-03 00:59:59.919473 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2025-09-03 00:59:59.919481 | orchestrator | Wednesday 03 September 2025 00:58:12 +0000 (0:00:01.181) 0:01:24.579 *** 2025-09-03 00:59:59.919489 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:59:59.919497 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:59:59.919505 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:59:59.919513 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:59:59.919521 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:59:59.919528 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:59:59.919536 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:59:59.919544 | orchestrator | 2025-09-03 00:59:59.919552 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2025-09-03 00:59:59.919560 | orchestrator | Wednesday 03 September 2025 00:58:12 +0000 (0:00:00.786) 0:01:25.365 *** 2025-09-03 00:59:59.919568 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:59:59.919576 | orchestrator | skipping: [testbed-node-0] 2025-09-03 00:59:59.919583 | orchestrator | skipping: [testbed-node-1] 2025-09-03 00:59:59.919591 | orchestrator | skipping: [testbed-node-2] 2025-09-03 00:59:59.919599 | orchestrator | skipping: [testbed-node-3] 2025-09-03 00:59:59.919607 | orchestrator | skipping: [testbed-node-4] 2025-09-03 00:59:59.919615 | orchestrator | skipping: [testbed-node-5] 2025-09-03 00:59:59.919623 | orchestrator | 2025-09-03 00:59:59.919630 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2025-09-03 00:59:59.919644 | orchestrator | Wednesday 03 September 2025 00:58:13 +0000 (0:00:00.627) 0:01:25.992 *** 2025-09-03 00:59:59.919653 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-03 00:59:59.919666 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-03 00:59:59.919679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-03 00:59:59.919688 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-03 00:59:59.919696 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-03 00:59:59.919704 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-03 00:59:59.919712 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-03 
00:59:59.919726 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:59:59.919735 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:59:59.919747 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:59:59.919762 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-03 00:59:59.919771 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-03 00:59:59.919779 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-03 00:59:59.919788 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-03 00:59:59.919796 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:59:59.919811 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-03 00:59:59.919819 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:59:59.919831 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:59:59.919843 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-03 00:59:59.919852 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': 
{'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-03 00:59:59.919862 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-03 00:59:59.919870 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-03 00:59:59.919883 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-03 00:59:59.919892 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-03 00:59:59.919904 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:59:59.919916 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-03 00:59:59.919936 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:59:59.919945 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:59:59.919953 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-03 00:59:59.919966 | orchestrator | 2025-09-03 00:59:59.919974 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2025-09-03 00:59:59.919982 | orchestrator | Wednesday 03 September 2025 00:58:17 +0000 (0:00:03.892) 0:01:29.885 *** 2025-09-03 00:59:59.919990 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-09-03 00:59:59.919998 | orchestrator | skipping: [testbed-manager] 2025-09-03 00:59:59.920006 | orchestrator | 2025-09-03 00:59:59.920014 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-03 00:59:59.920022 | orchestrator | Wednesday 03 September 2025 00:58:18 +0000 (0:00:00.888) 0:01:30.774 *** 2025-09-03 00:59:59.920029 | orchestrator | 2025-09-03 00:59:59.920037 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-03 00:59:59.920045 | orchestrator | Wednesday 03 September 2025 00:58:18 +0000 (0:00:00.101) 0:01:30.876 *** 2025-09-03 00:59:59.920053 | orchestrator | 2025-09-03 00:59:59.920061 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-03 00:59:59.920069 | orchestrator | Wednesday 03 September 2025 00:58:18 +0000 (0:00:00.117) 0:01:30.993 *** 2025-09-03 00:59:59.920076 | orchestrator | 2025-09-03 00:59:59.920084 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-03 00:59:59.920092 | orchestrator | Wednesday 03 September 2025 00:58:18 +0000 (0:00:00.060) 0:01:31.054 *** 
2025-09-03 00:59:59.920100 | orchestrator | 2025-09-03 00:59:59.920107 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-03 00:59:59.920115 | orchestrator | Wednesday 03 September 2025 00:58:18 +0000 (0:00:00.264) 0:01:31.318 *** 2025-09-03 00:59:59.920123 | orchestrator | 2025-09-03 00:59:59.920131 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-03 00:59:59.920138 | orchestrator | Wednesday 03 September 2025 00:58:19 +0000 (0:00:00.120) 0:01:31.439 *** 2025-09-03 00:59:59.920146 | orchestrator | 2025-09-03 00:59:59.920154 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-03 00:59:59.920162 | orchestrator | Wednesday 03 September 2025 00:58:19 +0000 (0:00:00.103) 0:01:31.543 *** 2025-09-03 00:59:59.920170 | orchestrator | 2025-09-03 00:59:59.920177 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2025-09-03 00:59:59.920185 | orchestrator | Wednesday 03 September 2025 00:58:19 +0000 (0:00:00.105) 0:01:31.648 *** 2025-09-03 00:59:59.920193 | orchestrator | changed: [testbed-manager] 2025-09-03 00:59:59.920201 | orchestrator | 2025-09-03 00:59:59.920209 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2025-09-03 00:59:59.920220 | orchestrator | Wednesday 03 September 2025 00:58:36 +0000 (0:00:17.374) 0:01:49.022 *** 2025-09-03 00:59:59.920228 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:59:59.920236 | orchestrator | changed: [testbed-manager] 2025-09-03 00:59:59.920244 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:59:59.920252 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:59:59.920260 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:59:59.920268 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:59:59.920276 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:59:59.920284 | orchestrator | 2025-09-03 00:59:59.920292 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2025-09-03 00:59:59.920299 | orchestrator | Wednesday 03 September 2025 00:58:49 +0000 (0:00:13.057) 0:02:02.080 *** 2025-09-03 00:59:59.920307 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:59:59.920316 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:59:59.920323 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:59:59.920331 | orchestrator | 2025-09-03 00:59:59.920339 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2025-09-03 00:59:59.920347 | orchestrator | Wednesday 03 September 2025 00:58:55 +0000 (0:00:05.978) 0:02:08.058 *** 2025-09-03 00:59:59.920360 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:59:59.920368 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:59:59.920376 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:59:59.920384 | orchestrator | 2025-09-03 00:59:59.920392 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2025-09-03 00:59:59.920400 | orchestrator | Wednesday 03 September 2025 00:59:05 +0000 (0:00:10.235) 0:02:18.294 *** 2025-09-03 00:59:59.920408 | orchestrator | changed: [testbed-manager] 2025-09-03 00:59:59.920415 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:59:59.920423 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:59:59.920431 | orchestrator | changed: [testbed-node-4] 
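Each "Restart ... container" handler ultimately asks the container engine on the target host to bounce the named Kolla container. The real handlers go through Kolla's own container module, which also compares the rendered configuration and can recreate the container, so the snippet below is only a rough approximation using the Docker SDK — it assumes the docker Python package is installed, a local daemon is reachable, and the container names are the ones shown in the log.

import docker
from docker.errors import NotFound

def restart_if_present(name: str) -> bool:
    """Restart a container by name; return True if it existed and was restarted."""
    client = docker.from_env()          # assumes a reachable local Docker daemon
    try:
        container = client.containers.get(name)
    except NotFound:
        return False
    container.restart(timeout=30)       # graceful stop, then start
    return True

if __name__ == "__main__":
    for name in ("prometheus_cadvisor", "prometheus_alertmanager"):
        print(name, "restarted" if restart_if_present(name) else "not found")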
2025-09-03 00:59:59.920439 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:59:59.920447 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:59:59.920455 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:59:59.920463 | orchestrator | 2025-09-03 00:59:59.920471 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2025-09-03 00:59:59.920479 | orchestrator | Wednesday 03 September 2025 00:59:24 +0000 (0:00:18.168) 0:02:36.462 *** 2025-09-03 00:59:59.920487 | orchestrator | changed: [testbed-manager] 2025-09-03 00:59:59.920495 | orchestrator | 2025-09-03 00:59:59.920503 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2025-09-03 00:59:59.920511 | orchestrator | Wednesday 03 September 2025 00:59:31 +0000 (0:00:07.894) 0:02:44.357 *** 2025-09-03 00:59:59.920518 | orchestrator | changed: [testbed-node-0] 2025-09-03 00:59:59.920526 | orchestrator | changed: [testbed-node-1] 2025-09-03 00:59:59.920534 | orchestrator | changed: [testbed-node-2] 2025-09-03 00:59:59.920542 | orchestrator | 2025-09-03 00:59:59.920551 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2025-09-03 00:59:59.920559 | orchestrator | Wednesday 03 September 2025 00:59:41 +0000 (0:00:09.904) 0:02:54.262 *** 2025-09-03 00:59:59.920566 | orchestrator | changed: [testbed-manager] 2025-09-03 00:59:59.920574 | orchestrator | 2025-09-03 00:59:59.920582 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2025-09-03 00:59:59.920618 | orchestrator | Wednesday 03 September 2025 00:59:46 +0000 (0:00:05.082) 0:02:59.344 *** 2025-09-03 00:59:59.920627 | orchestrator | changed: [testbed-node-3] 2025-09-03 00:59:59.920635 | orchestrator | changed: [testbed-node-4] 2025-09-03 00:59:59.920643 | orchestrator | changed: [testbed-node-5] 2025-09-03 00:59:59.920651 | orchestrator | 2025-09-03 00:59:59.920659 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-03 00:59:59.920667 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-03 00:59:59.920675 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-03 00:59:59.920683 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-03 00:59:59.920691 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-03 00:59:59.920699 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-09-03 00:59:59.920707 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-09-03 00:59:59.920715 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-09-03 00:59:59.920723 | orchestrator | 2025-09-03 00:59:59.920731 | orchestrator | 2025-09-03 00:59:59.920739 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-03 00:59:59.920752 | orchestrator | Wednesday 03 September 2025 00:59:58 +0000 (0:00:11.724) 0:03:11.069 *** 2025-09-03 00:59:59.920760 | orchestrator | =============================================================================== 2025-09-03 00:59:59.920768 | orchestrator | prometheus : Copying 
over custom prometheus alert rules files ---------- 24.61s 2025-09-03 00:59:59.920776 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 18.17s 2025-09-03 00:59:59.920784 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 17.37s 2025-09-03 00:59:59.920792 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 17.03s 2025-09-03 00:59:59.920800 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 13.06s 2025-09-03 00:59:59.920812 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 11.72s 2025-09-03 00:59:59.920820 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 10.24s 2025-09-03 00:59:59.920828 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 9.90s 2025-09-03 00:59:59.920836 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 7.89s 2025-09-03 00:59:59.920844 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 6.06s 2025-09-03 00:59:59.920852 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 5.98s 2025-09-03 00:59:59.920863 | orchestrator | prometheus : Copying over config.json files ----------------------------- 5.10s 2025-09-03 00:59:59.920871 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 5.08s 2025-09-03 00:59:59.920879 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 5.07s 2025-09-03 00:59:59.920887 | orchestrator | prometheus : Check prometheus containers -------------------------------- 3.89s 2025-09-03 00:59:59.920895 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.48s 2025-09-03 00:59:59.920903 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 2.70s 2025-09-03 00:59:59.920911 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.48s 2025-09-03 00:59:59.920919 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 2.05s 2025-09-03 00:59:59.920940 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 2.04s 2025-09-03 00:59:59.920949 | orchestrator | 2025-09-03 00:59:59 | INFO  | Task 2ce95322-778b-4dfc-ac97-427d82cda0df is in state STARTED 2025-09-03 00:59:59.920957 | orchestrator | 2025-09-03 00:59:59 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:00:02.965845 | orchestrator | 2025-09-03 01:00:02 | INFO  | Task f53a3ec7-70f2-490b-90d0-e98642e9d075 is in state STARTED 2025-09-03 01:00:02.967000 | orchestrator | 2025-09-03 01:00:02 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:00:02.968351 | orchestrator | 2025-09-03 01:00:02 | INFO  | Task 2ce95322-778b-4dfc-ac97-427d82cda0df is in state STARTED 2025-09-03 01:00:02.969662 | orchestrator | 2025-09-03 01:00:02 | INFO  | Task 0565caac-e459-42a0-9d02-66460dafce6f is in state STARTED 2025-09-03 01:00:02.970235 | orchestrator | 2025-09-03 01:00:02 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:00:06.033870 | orchestrator | 2025-09-03 01:00:06 | INFO  | Task f53a3ec7-70f2-490b-90d0-e98642e9d075 is in state STARTED 2025-09-03 01:00:06.036785 | orchestrator | 2025-09-03 01:00:06 | INFO  | Task 
b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:00:06.039485 | orchestrator | 2025-09-03 01:00:06 | INFO  | Task 2ce95322-778b-4dfc-ac97-427d82cda0df is in state STARTED 2025-09-03 01:00:06.042525 | orchestrator | 2025-09-03 01:00:06 | INFO  | Task 0565caac-e459-42a0-9d02-66460dafce6f is in state STARTED 2025-09-03 01:00:06.042694 | orchestrator | 2025-09-03 01:00:06 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:00:09.087888 | orchestrator | 2025-09-03 01:00:09 | INFO  | Task f53a3ec7-70f2-490b-90d0-e98642e9d075 is in state STARTED 2025-09-03 01:00:09.089281 | orchestrator | 2025-09-03 01:00:09 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:00:09.091425 | orchestrator | 2025-09-03 01:00:09 | INFO  | Task 2ce95322-778b-4dfc-ac97-427d82cda0df is in state STARTED 2025-09-03 01:00:09.092538 | orchestrator | 2025-09-03 01:00:09 | INFO  | Task 0565caac-e459-42a0-9d02-66460dafce6f is in state STARTED 2025-09-03 01:00:09.092718 | orchestrator | 2025-09-03 01:00:09 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:00:12.134759 | orchestrator | 2025-09-03 01:00:12 | INFO  | Task f53a3ec7-70f2-490b-90d0-e98642e9d075 is in state STARTED 2025-09-03 01:00:12.138206 | orchestrator | 2025-09-03 01:00:12 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:00:12.139419 | orchestrator | 2025-09-03 01:00:12 | INFO  | Task 2ce95322-778b-4dfc-ac97-427d82cda0df is in state STARTED 2025-09-03 01:00:12.142246 | orchestrator | 2025-09-03 01:00:12 | INFO  | Task 0565caac-e459-42a0-9d02-66460dafce6f is in state STARTED 2025-09-03 01:00:12.142272 | orchestrator | 2025-09-03 01:00:12 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:00:15.189404 | orchestrator | 2025-09-03 01:00:15 | INFO  | Task f53a3ec7-70f2-490b-90d0-e98642e9d075 is in state STARTED 2025-09-03 01:00:15.189614 | orchestrator | 2025-09-03 01:00:15 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:00:15.189647 | orchestrator | 2025-09-03 01:00:15 | INFO  | Task 2ce95322-778b-4dfc-ac97-427d82cda0df is in state STARTED 2025-09-03 01:00:15.190595 | orchestrator | 2025-09-03 01:00:15 | INFO  | Task 0565caac-e459-42a0-9d02-66460dafce6f is in state STARTED 2025-09-03 01:00:15.190620 | orchestrator | 2025-09-03 01:00:15 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:00:18.234562 | orchestrator | 2025-09-03 01:00:18 | INFO  | Task f53a3ec7-70f2-490b-90d0-e98642e9d075 is in state STARTED 2025-09-03 01:00:18.237206 | orchestrator | 2025-09-03 01:00:18 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:00:18.239491 | orchestrator | 2025-09-03 01:00:18 | INFO  | Task 2ce95322-778b-4dfc-ac97-427d82cda0df is in state STARTED 2025-09-03 01:00:18.241647 | orchestrator | 2025-09-03 01:00:18 | INFO  | Task 0565caac-e459-42a0-9d02-66460dafce6f is in state STARTED 2025-09-03 01:00:18.241681 | orchestrator | 2025-09-03 01:00:18 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:00:21.283470 | orchestrator | 2025-09-03 01:00:21 | INFO  | Task f53a3ec7-70f2-490b-90d0-e98642e9d075 is in state STARTED 2025-09-03 01:00:21.284620 | orchestrator | 2025-09-03 01:00:21 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:00:21.286012 | orchestrator | 2025-09-03 01:00:21 | INFO  | Task 2ce95322-778b-4dfc-ac97-427d82cda0df is in state STARTED 2025-09-03 01:00:21.287811 | orchestrator | 2025-09-03 01:00:21 | INFO  | Task 
0565caac-e459-42a0-9d02-66460dafce6f is in state STARTED 2025-09-03 01:00:21.287834 | orchestrator | 2025-09-03 01:00:21 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:00:24.325736 | orchestrator | 2025-09-03 01:00:24 | INFO  | Task f53a3ec7-70f2-490b-90d0-e98642e9d075 is in state STARTED 2025-09-03 01:00:24.326179 | orchestrator | 2025-09-03 01:00:24 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:00:24.328455 | orchestrator | 2025-09-03 01:00:24 | INFO  | Task 2ce95322-778b-4dfc-ac97-427d82cda0df is in state STARTED 2025-09-03 01:00:24.329127 | orchestrator | 2025-09-03 01:00:24 | INFO  | Task 0565caac-e459-42a0-9d02-66460dafce6f is in state STARTED 2025-09-03 01:00:24.329192 | orchestrator | 2025-09-03 01:00:24 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:00:27.371404 | orchestrator | 2025-09-03 01:00:27 | INFO  | Task f53a3ec7-70f2-490b-90d0-e98642e9d075 is in state STARTED 2025-09-03 01:00:27.371503 | orchestrator | 2025-09-03 01:00:27 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:00:27.371518 | orchestrator | 2025-09-03 01:00:27 | INFO  | Task 2ce95322-778b-4dfc-ac97-427d82cda0df is in state STARTED 2025-09-03 01:00:27.372069 | orchestrator | 2025-09-03 01:00:27 | INFO  | Task 0565caac-e459-42a0-9d02-66460dafce6f is in state STARTED 2025-09-03 01:00:27.372095 | orchestrator | 2025-09-03 01:00:27 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:00:30.399701 | orchestrator | 2025-09-03 01:00:30 | INFO  | Task f53a3ec7-70f2-490b-90d0-e98642e9d075 is in state STARTED 2025-09-03 01:00:30.399815 | orchestrator | 2025-09-03 01:00:30 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:00:30.399830 | orchestrator | 2025-09-03 01:00:30 | INFO  | Task 2ce95322-778b-4dfc-ac97-427d82cda0df is in state STARTED 2025-09-03 01:00:30.401095 | orchestrator | 2025-09-03 01:00:30 | INFO  | Task 0565caac-e459-42a0-9d02-66460dafce6f is in state STARTED 2025-09-03 01:00:30.401125 | orchestrator | 2025-09-03 01:00:30 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:00:33.434367 | orchestrator | 2025-09-03 01:00:33 | INFO  | Task f53a3ec7-70f2-490b-90d0-e98642e9d075 is in state STARTED 2025-09-03 01:00:33.434837 | orchestrator | 2025-09-03 01:00:33 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:00:33.436217 | orchestrator | 2025-09-03 01:00:33 | INFO  | Task 2ce95322-778b-4dfc-ac97-427d82cda0df is in state STARTED 2025-09-03 01:00:33.437227 | orchestrator | 2025-09-03 01:00:33 | INFO  | Task 0565caac-e459-42a0-9d02-66460dafce6f is in state STARTED 2025-09-03 01:00:33.437250 | orchestrator | 2025-09-03 01:00:33 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:00:36.473527 | orchestrator | 2025-09-03 01:00:36 | INFO  | Task f53a3ec7-70f2-490b-90d0-e98642e9d075 is in state STARTED 2025-09-03 01:00:36.473622 | orchestrator | 2025-09-03 01:00:36 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:00:36.473635 | orchestrator | 2025-09-03 01:00:36 | INFO  | Task 2ce95322-778b-4dfc-ac97-427d82cda0df is in state STARTED 2025-09-03 01:00:36.473646 | orchestrator | 2025-09-03 01:00:36 | INFO  | Task 0565caac-e459-42a0-9d02-66460dafce6f is in state STARTED 2025-09-03 01:00:36.473656 | orchestrator | 2025-09-03 01:00:36 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:00:39.486478 | orchestrator | 2025-09-03 01:00:39 | INFO  | Task 
f53a3ec7-70f2-490b-90d0-e98642e9d075 is in state SUCCESS 2025-09-03 01:00:39.487327 | orchestrator | 2025-09-03 01:00:39.487358 | orchestrator | 2025-09-03 01:00:39.487370 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-03 01:00:39.487380 | orchestrator | 2025-09-03 01:00:39.487388 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-03 01:00:39.487398 | orchestrator | Wednesday 03 September 2025 00:56:58 +0000 (0:00:00.195) 0:00:00.195 *** 2025-09-03 01:00:39.487427 | orchestrator | ok: [testbed-node-0] 2025-09-03 01:00:39.487438 | orchestrator | ok: [testbed-node-1] 2025-09-03 01:00:39.487447 | orchestrator | ok: [testbed-node-2] 2025-09-03 01:00:39.487455 | orchestrator | ok: [testbed-node-3] 2025-09-03 01:00:39.487463 | orchestrator | ok: [testbed-node-4] 2025-09-03 01:00:39.487471 | orchestrator | ok: [testbed-node-5] 2025-09-03 01:00:39.487480 | orchestrator | 2025-09-03 01:00:39.487513 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-03 01:00:39.487522 | orchestrator | Wednesday 03 September 2025 00:56:59 +0000 (0:00:00.614) 0:00:00.810 *** 2025-09-03 01:00:39.487530 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2025-09-03 01:00:39.487539 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2025-09-03 01:00:39.487547 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2025-09-03 01:00:39.487555 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True) 2025-09-03 01:00:39.487563 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True) 2025-09-03 01:00:39.487571 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True) 2025-09-03 01:00:39.487579 | orchestrator | 2025-09-03 01:00:39.487587 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2025-09-03 01:00:39.487595 | orchestrator | 2025-09-03 01:00:39.487603 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-03 01:00:39.487612 | orchestrator | Wednesday 03 September 2025 00:56:59 +0000 (0:00:00.507) 0:00:01.318 *** 2025-09-03 01:00:39.487620 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-03 01:00:39.487630 | orchestrator | 2025-09-03 01:00:39.487639 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2025-09-03 01:00:39.487647 | orchestrator | Wednesday 03 September 2025 00:57:00 +0000 (0:00:00.869) 0:00:02.187 *** 2025-09-03 01:00:39.487655 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2025-09-03 01:00:39.487663 | orchestrator | 2025-09-03 01:00:39.487671 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2025-09-03 01:00:39.487679 | orchestrator | Wednesday 03 September 2025 00:57:03 +0000 (0:00:03.073) 0:00:05.260 *** 2025-09-03 01:00:39.487687 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2025-09-03 01:00:39.487696 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2025-09-03 01:00:39.487704 | orchestrator | 2025-09-03 01:00:39.487712 | orchestrator | TASK [service-ks-register : cinder | 
Creating projects] ************************ 2025-09-03 01:00:39.487720 | orchestrator | Wednesday 03 September 2025 00:57:09 +0000 (0:00:05.764) 0:00:11.025 *** 2025-09-03 01:00:39.487728 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-03 01:00:39.487737 | orchestrator | 2025-09-03 01:00:39.487745 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2025-09-03 01:00:39.487753 | orchestrator | Wednesday 03 September 2025 00:57:12 +0000 (0:00:02.876) 0:00:13.901 *** 2025-09-03 01:00:39.487761 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-03 01:00:39.487769 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2025-09-03 01:00:39.487777 | orchestrator | 2025-09-03 01:00:39.487785 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2025-09-03 01:00:39.487793 | orchestrator | Wednesday 03 September 2025 00:57:15 +0000 (0:00:03.255) 0:00:17.157 *** 2025-09-03 01:00:39.487801 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-03 01:00:39.487809 | orchestrator | 2025-09-03 01:00:39.487817 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2025-09-03 01:00:39.487825 | orchestrator | Wednesday 03 September 2025 00:57:18 +0000 (0:00:03.095) 0:00:20.252 *** 2025-09-03 01:00:39.487833 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-09-03 01:00:39.487841 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-09-03 01:00:39.487849 | orchestrator | 2025-09-03 01:00:39.487857 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2025-09-03 01:00:39.487865 | orchestrator | Wednesday 03 September 2025 00:57:26 +0000 (0:00:07.603) 0:00:27.856 *** 2025-09-03 01:00:39.487876 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-03 01:00:39.487912 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-03 01:00:39.487922 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-03 01:00:39.487979 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-03 01:00:39.487989 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-03 01:00:39.488006 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-03 01:00:39.488026 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-03 01:00:39.488035 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-03 01:00:39.488045 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-03 01:00:39.488055 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-03 01:00:39.488063 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-03 
01:00:39.488429 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-03 01:00:39.488445 | orchestrator | 2025-09-03 01:00:39.488467 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-03 01:00:39.488481 | orchestrator | Wednesday 03 September 2025 00:57:28 +0000 (0:00:02.284) 0:00:30.140 *** 2025-09-03 01:00:39.488495 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:00:39.488510 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:00:39.488525 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:00:39.488547 | orchestrator | skipping: [testbed-node-3] 2025-09-03 01:00:39.488557 | orchestrator | skipping: [testbed-node-4] 2025-09-03 01:00:39.488565 | orchestrator | skipping: [testbed-node-5] 2025-09-03 01:00:39.488573 | orchestrator | 2025-09-03 01:00:39.488581 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-03 01:00:39.488589 | orchestrator | Wednesday 03 September 2025 00:57:29 +0000 (0:00:00.427) 0:00:30.568 *** 2025-09-03 01:00:39.488597 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:00:39.488605 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:00:39.488613 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:00:39.488622 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-03 01:00:39.488630 | orchestrator | 2025-09-03 01:00:39.488638 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2025-09-03 01:00:39.488646 | orchestrator | Wednesday 03 September 2025 00:57:30 +0000 (0:00:00.794) 0:00:31.362 *** 2025-09-03 01:00:39.488654 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume) 2025-09-03 01:00:39.488663 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume) 2025-09-03 01:00:39.488671 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume) 2025-09-03 01:00:39.488679 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup) 2025-09-03 01:00:39.488687 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup) 2025-09-03 01:00:39.488694 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup) 2025-09-03 01:00:39.488702 | orchestrator | 2025-09-03 01:00:39.488710 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2025-09-03 01:00:39.488718 | orchestrator | Wednesday 03 September 2025 00:57:31 +0000 (0:00:01.498) 0:00:32.861 *** 2025-09-03 01:00:39.488728 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-03 01:00:39.488745 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-03 01:00:39.488755 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-03 01:00:39.488776 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-03 01:00:39.488785 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-03 01:00:39.488794 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-03 01:00:39.488809 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-03 01:00:39.488820 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-03 01:00:39.488839 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 
'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-03 01:00:39.488848 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-03 01:00:39.488859 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-03 01:00:39.488874 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-03 01:00:39.488883 | orchestrator | 2025-09-03 01:00:39.489211 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2025-09-03 01:00:39.489223 | orchestrator | Wednesday 03 September 2025 00:57:35 +0000 (0:00:03.608) 0:00:36.469 *** 2025-09-03 01:00:39.489231 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-09-03 01:00:39.489241 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-09-03 01:00:39.489249 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-09-03 01:00:39.489257 | orchestrator | 2025-09-03 01:00:39.489265 | orchestrator | TASK [cinder : Copy over Ceph 
keyring files for cinder-backup] ***************** 2025-09-03 01:00:39.489273 | orchestrator | Wednesday 03 September 2025 00:57:36 +0000 (0:00:01.811) 0:00:38.281 *** 2025-09-03 01:00:39.489281 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring) 2025-09-03 01:00:39.489289 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring) 2025-09-03 01:00:39.489297 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring) 2025-09-03 01:00:39.489305 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring) 2025-09-03 01:00:39.489313 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring) 2025-09-03 01:00:39.489401 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring) 2025-09-03 01:00:39.489413 | orchestrator | 2025-09-03 01:00:39.489421 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2025-09-03 01:00:39.489430 | orchestrator | Wednesday 03 September 2025 00:57:40 +0000 (0:00:03.185) 0:00:41.466 *** 2025-09-03 01:00:39.489438 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume) 2025-09-03 01:00:39.489453 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume) 2025-09-03 01:00:39.489461 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume) 2025-09-03 01:00:39.489469 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup) 2025-09-03 01:00:39.489477 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup) 2025-09-03 01:00:39.489485 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup) 2025-09-03 01:00:39.489493 | orchestrator | 2025-09-03 01:00:39.489501 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2025-09-03 01:00:39.489509 | orchestrator | Wednesday 03 September 2025 00:57:41 +0000 (0:00:01.026) 0:00:42.492 *** 2025-09-03 01:00:39.489517 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:00:39.489526 | orchestrator | 2025-09-03 01:00:39.489534 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2025-09-03 01:00:39.489541 | orchestrator | Wednesday 03 September 2025 00:57:41 +0000 (0:00:00.147) 0:00:42.640 *** 2025-09-03 01:00:39.489563 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:00:39.489572 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:00:39.489580 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:00:39.489588 | orchestrator | skipping: [testbed-node-3] 2025-09-03 01:00:39.489595 | orchestrator | skipping: [testbed-node-4] 2025-09-03 01:00:39.489603 | orchestrator | skipping: [testbed-node-5] 2025-09-03 01:00:39.489611 | orchestrator | 2025-09-03 01:00:39.489619 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-03 01:00:39.489627 | orchestrator | Wednesday 03 September 2025 00:57:42 +0000 (0:00:01.098) 0:00:43.738 *** 2025-09-03 01:00:39.489637 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-03 01:00:39.489646 | orchestrator | 2025-09-03 01:00:39.489654 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2025-09-03 01:00:39.489662 | orchestrator | Wednesday 03 September 2025 00:57:43 +0000 (0:00:01.351) 0:00:45.090 *** 2025-09-03 01:00:39.489671 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-03 01:00:39.489681 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-03 01:00:39.489715 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-03 01:00:39.489730 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 
'timeout': '30'}}}) 2025-09-03 01:00:39.489762 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-03 01:00:39.489772 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-03 01:00:39.489781 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-03 01:00:39.489790 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-03 01:00:39.489828 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-03 01:00:39.489845 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-03 01:00:39.489854 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-03 01:00:39.489862 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-03 01:00:39.489870 | orchestrator | 2025-09-03 01:00:39.489878 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-09-03 01:00:39.489887 | orchestrator | Wednesday 03 September 2025 00:57:46 +0000 (0:00:02.607) 0:00:47.697 *** 2025-09-03 01:00:39.489895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-03 01:00:39.489942 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-03 01:00:39.489963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-03 01:00:39.489972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-03 01:00:39.489981 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:00:39.489989 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:00:39.489998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-03 01:00:39.490006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-03 01:00:39.490061 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:00:39.490074 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-03 01:00:39.490099 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-03 01:00:39.490110 | orchestrator | skipping: [testbed-node-4] 2025-09-03 01:00:39.490120 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-03 01:00:39.490130 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-03 01:00:39.490140 | orchestrator | skipping: [testbed-node-3] 2025-09-03 01:00:39.490150 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-03 01:00:39.490161 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-03 01:00:39.490177 | orchestrator | skipping: [testbed-node-5] 2025-09-03 01:00:39.490186 | orchestrator | 2025-09-03 01:00:39.490196 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-09-03 01:00:39.490206 | orchestrator | Wednesday 03 September 2025 00:57:47 +0000 (0:00:01.358) 0:00:49.056 *** 2025-09-03 01:00:39.490225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-03 01:00:39.490237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-03 01:00:39.490247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-03 01:00:39.490257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-03 01:00:39.490267 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:00:39.490277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-03 01:00:39.490298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-03 01:00:39.490308 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:00:39.490322 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:00:39.490332 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-03 01:00:39.490343 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-03 01:00:39.490353 | orchestrator | skipping: [testbed-node-3] 2025-09-03 01:00:39.490363 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-03 01:00:39.490372 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-03 01:00:39.490385 | orchestrator | skipping: [testbed-node-4] 2025-09-03 01:00:39.490399 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 
'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-03 01:00:39.490411 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-03 01:00:39.490420 | orchestrator | skipping: [testbed-node-5] 2025-09-03 01:00:39.490428 | orchestrator | 2025-09-03 01:00:39.490436 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-09-03 01:00:39.490445 | orchestrator | Wednesday 03 September 2025 00:57:49 +0000 (0:00:02.138) 0:00:51.195 *** 2025-09-03 01:00:39.490453 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-03 01:00:39.490462 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 
'no'}}}}) 2025-09-03 01:00:39.490478 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-03 01:00:39.490496 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-03 01:00:39.490505 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-03 01:00:39.490514 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-03 01:00:39.490522 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-03 01:00:39.490531 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-03 01:00:39.490544 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-03 01:00:39.490562 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-03 01:00:39.490571 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-03 01:00:39.490580 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 
'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-03 01:00:39.490588 | orchestrator | 2025-09-03 01:00:39.490596 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-09-03 01:00:39.490605 | orchestrator | Wednesday 03 September 2025 00:57:53 +0000 (0:00:03.330) 0:00:54.525 *** 2025-09-03 01:00:39.490613 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-09-03 01:00:39.490622 | orchestrator | skipping: [testbed-node-3] 2025-09-03 01:00:39.490630 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-09-03 01:00:39.490643 | orchestrator | skipping: [testbed-node-4] 2025-09-03 01:00:39.490652 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-09-03 01:00:39.490660 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-09-03 01:00:39.490668 | orchestrator | skipping: [testbed-node-5] 2025-09-03 01:00:39.490676 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-09-03 01:00:39.490684 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-09-03 01:00:39.490692 | orchestrator | 2025-09-03 01:00:39.490700 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-09-03 01:00:39.490708 | orchestrator | Wednesday 03 September 2025 00:57:55 +0000 (0:00:02.267) 0:00:56.793 *** 2025-09-03 01:00:39.490716 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-03 01:00:39.490734 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-03 01:00:39.490743 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-03 01:00:39.490752 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-03 01:00:39.490765 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-03 01:00:39.490778 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-03 01:00:39.490791 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-03 01:00:39.490860 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-03 01:00:39.490870 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-03 01:00:39.490885 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-03 01:00:39.490894 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-03 01:00:39.490902 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-03 01:00:39.490911 | orchestrator | 2025-09-03 01:00:39.490919 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-09-03 01:00:39.490976 | orchestrator | Wednesday 03 September 2025 00:58:05 +0000 (0:00:10.178) 0:01:06.972 *** 2025-09-03 01:00:39.490992 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:00:39.491001 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:00:39.491009 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:00:39.491018 | orchestrator | changed: [testbed-node-3] 2025-09-03 01:00:39.491026 | orchestrator | changed: [testbed-node-4] 2025-09-03 01:00:39.491034 | orchestrator | changed: [testbed-node-5] 2025-09-03 01:00:39.491042 | orchestrator | 2025-09-03 01:00:39.491055 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-09-03 01:00:39.491063 | orchestrator | Wednesday 03 September 2025 00:58:07 +0000 (0:00:02.320) 0:01:09.292 *** 2025-09-03 01:00:39.491072 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-03 01:00:39.491081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-03 
01:00:39.491095 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:00:39.491104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-03 01:00:39.491112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-03 01:00:39.491121 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:00:39.491134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-03 01:00:39.491146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-03 01:00:39.491155 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:00:39.491164 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': 
True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-03 01:00:39.491177 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-03 01:00:39.491186 | orchestrator | skipping: [testbed-node-3] 2025-09-03 01:00:39.491194 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-03 01:00:39.491203 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-03 01:00:39.491212 | orchestrator | skipping: [testbed-node-5] 2025-09-03 01:00:39.491230 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-03 01:00:39.491240 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-03 01:00:39.491254 | orchestrator | skipping: [testbed-node-4] 2025-09-03 01:00:39.491262 | orchestrator | 2025-09-03 01:00:39.491270 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-09-03 01:00:39.491279 | orchestrator | Wednesday 03 September 2025 00:58:09 +0000 (0:00:01.740) 0:01:11.033 *** 2025-09-03 01:00:39.491287 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:00:39.491295 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:00:39.491303 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:00:39.491311 | orchestrator | skipping: [testbed-node-3] 2025-09-03 01:00:39.491319 | orchestrator | skipping: [testbed-node-4] 2025-09-03 01:00:39.491327 | orchestrator | skipping: [testbed-node-5] 2025-09-03 01:00:39.491335 | orchestrator | 2025-09-03 01:00:39.491343 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-09-03 01:00:39.491351 | orchestrator | Wednesday 03 September 2025 00:58:10 +0000 (0:00:00.961) 0:01:11.994 *** 2025-09-03 01:00:39.491360 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-03 01:00:39.491368 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-03 01:00:39.491389 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-03 01:00:39.491404 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-03 01:00:39.491413 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-03 01:00:39.491422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-03 01:00:39.491430 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-03 01:00:39.491526 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-03 01:00:39.491546 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-03 01:00:39.491554 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-03 01:00:39.491561 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-03 01:00:39.491568 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-03 01:00:39.491575 | orchestrator | 2025-09-03 01:00:39.491582 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-03 01:00:39.491589 | orchestrator | Wednesday 03 September 2025 00:58:13 +0000 (0:00:02.344) 0:01:14.339 *** 2025-09-03 01:00:39.491596 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:00:39.491603 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:00:39.491610 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:00:39.491617 | orchestrator | skipping: [testbed-node-3] 2025-09-03 01:00:39.491623 | orchestrator | skipping: [testbed-node-4] 2025-09-03 01:00:39.491630 | orchestrator | skipping: [testbed-node-5] 2025-09-03 01:00:39.491637 | orchestrator | 2025-09-03 01:00:39.491644 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-09-03 01:00:39.491651 | orchestrator | Wednesday 03 September 2025 00:58:13 +0000 (0:00:00.772) 0:01:15.111 *** 2025-09-03 01:00:39.491658 | orchestrator | changed: [testbed-node-0] 2025-09-03 01:00:39.491665 | orchestrator | 2025-09-03 01:00:39.491671 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2025-09-03 01:00:39.491678 | orchestrator | Wednesday 03 September 2025 00:58:16 +0000 (0:00:02.444) 0:01:17.556 *** 2025-09-03 01:00:39.491685 | orchestrator | changed: [testbed-node-0] 2025-09-03 01:00:39.491692 | orchestrator | 2025-09-03 01:00:39.491699 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2025-09-03 01:00:39.491711 | orchestrator | Wednesday 03 September 2025 00:58:18 +0000 (0:00:01.881) 0:01:19.437 *** 2025-09-03 01:00:39.491717 | orchestrator | changed: [testbed-node-0] 2025-09-03 01:00:39.491724 | orchestrator | 2025-09-03 01:00:39.491731 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-03 01:00:39.491738 | orchestrator | Wednesday 03 September 2025 00:58:34 +0000 (0:00:16.306) 0:01:35.744 *** 2025-09-03 01:00:39.491745 | orchestrator | 2025-09-03 01:00:39.491755 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-03 01:00:39.491762 | orchestrator | Wednesday 03 September 2025 00:58:34 +0000 (0:00:00.057) 0:01:35.802 *** 2025-09-03 01:00:39.491769 | orchestrator | 
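The items echoed in the tasks above are kolla-ansible service definitions for the cinder containers, printed as Python dicts: each entry carries the container name, the 2024.2 image from registry.osism.tech, the bind mounts, and a healthcheck (healthcheck_curl against the API port 8776 for cinder-api, healthcheck_port against 5672 for the scheduler/volume/backup agents). A minimal sketch, using only the structure visible in the log; the summarize helper below is illustrative and not part of kolla-ansible or osism:

    # Example service definition copied from the log output above.
    cinder_backup = {
        'container_name': 'cinder_backup',
        'image': 'registry.osism.tech/kolla/cinder-backup:2024.2',
        'privileged': True,
        'healthcheck': {
            'interval': '30', 'retries': '3', 'start_period': '5',
            'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'],
            'timeout': '30',
        },
    }

    def summarize(service):
        """One-line summary of a kolla service definition as printed in the log."""
        check = ' '.join(service['healthcheck']['test'][1:])  # drop the CMD-SHELL marker
        mode = 'privileged' if service.get('privileged') else 'unprivileged'
        return "%s: %s (%s, healthcheck: %s)" % (
            service['container_name'], service['image'], mode, check)

    print(summarize(cinder_backup))
    # -> cinder_backup: registry.osism.tech/kolla/cinder-backup:2024.2 (privileged, healthcheck: healthcheck_port cinder-backup 5672)

The cinder-api entries above follow the same shape, except that their healthcheck test is a healthcheck_curl against http://192.168.16.1x:8776 on the respective control node.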
2025-09-03 01:00:39.491776 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-03 01:00:39.491786 | orchestrator | Wednesday 03 September 2025 00:58:34 +0000 (0:00:00.056) 0:01:35.859 *** 2025-09-03 01:00:39.491793 | orchestrator | 2025-09-03 01:00:39.491800 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-03 01:00:39.491807 | orchestrator | Wednesday 03 September 2025 00:58:34 +0000 (0:00:00.058) 0:01:35.917 *** 2025-09-03 01:00:39.491814 | orchestrator | 2025-09-03 01:00:39.491820 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-03 01:00:39.491827 | orchestrator | Wednesday 03 September 2025 00:58:34 +0000 (0:00:00.060) 0:01:35.978 *** 2025-09-03 01:00:39.491834 | orchestrator | 2025-09-03 01:00:39.491840 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-03 01:00:39.491847 | orchestrator | Wednesday 03 September 2025 00:58:34 +0000 (0:00:00.058) 0:01:36.036 *** 2025-09-03 01:00:39.491854 | orchestrator | 2025-09-03 01:00:39.491860 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-09-03 01:00:39.491867 | orchestrator | Wednesday 03 September 2025 00:58:34 +0000 (0:00:00.059) 0:01:36.096 *** 2025-09-03 01:00:39.491874 | orchestrator | changed: [testbed-node-0] 2025-09-03 01:00:39.491880 | orchestrator | changed: [testbed-node-1] 2025-09-03 01:00:39.491887 | orchestrator | changed: [testbed-node-2] 2025-09-03 01:00:39.491894 | orchestrator | 2025-09-03 01:00:39.491901 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2025-09-03 01:00:39.491908 | orchestrator | Wednesday 03 September 2025 00:58:57 +0000 (0:00:22.625) 0:01:58.722 *** 2025-09-03 01:00:39.491914 | orchestrator | changed: [testbed-node-0] 2025-09-03 01:00:39.491921 | orchestrator | changed: [testbed-node-2] 2025-09-03 01:00:39.491966 | orchestrator | changed: [testbed-node-1] 2025-09-03 01:00:39.491973 | orchestrator | 2025-09-03 01:00:39.491980 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2025-09-03 01:00:39.491987 | orchestrator | Wednesday 03 September 2025 00:59:03 +0000 (0:00:06.026) 0:02:04.748 *** 2025-09-03 01:00:39.491993 | orchestrator | changed: [testbed-node-4] 2025-09-03 01:00:39.492000 | orchestrator | changed: [testbed-node-5] 2025-09-03 01:00:39.492007 | orchestrator | changed: [testbed-node-3] 2025-09-03 01:00:39.492014 | orchestrator | 2025-09-03 01:00:39.492021 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2025-09-03 01:00:39.492028 | orchestrator | Wednesday 03 September 2025 01:00:23 +0000 (0:01:20.432) 0:03:25.181 *** 2025-09-03 01:00:39.492035 | orchestrator | changed: [testbed-node-3] 2025-09-03 01:00:39.492042 | orchestrator | changed: [testbed-node-4] 2025-09-03 01:00:39.492049 | orchestrator | changed: [testbed-node-5] 2025-09-03 01:00:39.492056 | orchestrator | 2025-09-03 01:00:39.492062 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2025-09-03 01:00:39.492069 | orchestrator | Wednesday 03 September 2025 01:00:35 +0000 (0:00:11.944) 0:03:37.125 *** 2025-09-03 01:00:39.492076 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:00:39.492083 | orchestrator | 2025-09-03 01:00:39.492090 | orchestrator | PLAY RECAP 
********************************************************************* 2025-09-03 01:00:39.492097 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-03 01:00:39.492112 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-09-03 01:00:39.492119 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-09-03 01:00:39.492126 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-03 01:00:39.492133 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-03 01:00:39.492140 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-03 01:00:39.492147 | orchestrator | 2025-09-03 01:00:39.492154 | orchestrator | 2025-09-03 01:00:39.492162 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-03 01:00:39.492170 | orchestrator | Wednesday 03 September 2025 01:00:36 +0000 (0:00:01.168) 0:03:38.294 *** 2025-09-03 01:00:39.492178 | orchestrator | =============================================================================== 2025-09-03 01:00:39.492186 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 80.43s 2025-09-03 01:00:39.492194 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 22.63s 2025-09-03 01:00:39.492201 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 16.31s 2025-09-03 01:00:39.492208 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 11.94s 2025-09-03 01:00:39.492216 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 10.18s 2025-09-03 01:00:39.492223 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 7.60s 2025-09-03 01:00:39.492231 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 6.03s 2025-09-03 01:00:39.492239 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 5.76s 2025-09-03 01:00:39.492251 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.61s 2025-09-03 01:00:39.492259 | orchestrator | cinder : Copying over config.json files for services -------------------- 3.33s 2025-09-03 01:00:39.492267 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.26s 2025-09-03 01:00:39.492274 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.19s 2025-09-03 01:00:39.492286 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.10s 2025-09-03 01:00:39.492293 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.07s 2025-09-03 01:00:39.492300 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 2.88s 2025-09-03 01:00:39.492308 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 2.61s 2025-09-03 01:00:39.492315 | orchestrator | cinder : Creating Cinder database --------------------------------------- 2.44s 2025-09-03 01:00:39.492323 | orchestrator | cinder : Check cinder containers ---------------------------------------- 
2.34s 2025-09-03 01:00:39.492330 | orchestrator | cinder : Generating 'hostnqn' file for cinder_volume -------------------- 2.32s 2025-09-03 01:00:39.492338 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.28s 2025-09-03 01:00:39.492345 | orchestrator | 2025-09-03 01:00:39 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:00:39.492353 | orchestrator | 2025-09-03 01:00:39 | INFO  | Task 2ce95322-778b-4dfc-ac97-427d82cda0df is in state STARTED 2025-09-03 01:00:39.492361 | orchestrator | 2025-09-03 01:00:39 | INFO  | Task 26b38d1d-0318-4371-9dc3-415f6f53dccc is in state STARTED 2025-09-03 01:00:39.492369 | orchestrator | 2025-09-03 01:00:39 | INFO  | Task 0565caac-e459-42a0-9d02-66460dafce6f is in state STARTED 2025-09-03 01:00:39.492381 | orchestrator | 2025-09-03 01:00:39 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:00:42.515519 | orchestrator | 2025-09-03 01:00:42 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:00:42.516014 | orchestrator | 2025-09-03 01:00:42 | INFO  | Task 2ce95322-778b-4dfc-ac97-427d82cda0df is in state STARTED 2025-09-03 01:00:42.516331 | orchestrator | 2025-09-03 01:00:42 | INFO  | Task 26b38d1d-0318-4371-9dc3-415f6f53dccc is in state STARTED 2025-09-03 01:00:42.516915 | orchestrator | 2025-09-03 01:00:42 | INFO  | Task 0565caac-e459-42a0-9d02-66460dafce6f is in state STARTED 2025-09-03 01:00:42.516965 | orchestrator | 2025-09-03 01:00:42 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:00:45.545596 | orchestrator | 2025-09-03 01:00:45 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:00:45.546355 | orchestrator | 2025-09-03 01:00:45 | INFO  | Task 2ce95322-778b-4dfc-ac97-427d82cda0df is in state STARTED 2025-09-03 01:00:45.546838 | orchestrator | 2025-09-03 01:00:45 | INFO  | Task 26b38d1d-0318-4371-9dc3-415f6f53dccc is in state STARTED 2025-09-03 01:00:45.547568 | orchestrator | 2025-09-03 01:00:45 | INFO  | Task 0565caac-e459-42a0-9d02-66460dafce6f is in state STARTED 2025-09-03 01:00:45.547590 | orchestrator | 2025-09-03 01:00:45 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:00:48.569403 | orchestrator | 2025-09-03 01:00:48 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:00:48.569507 | orchestrator | 2025-09-03 01:00:48 | INFO  | Task 2ce95322-778b-4dfc-ac97-427d82cda0df is in state STARTED 2025-09-03 01:00:48.570095 | orchestrator | 2025-09-03 01:00:48 | INFO  | Task 26b38d1d-0318-4371-9dc3-415f6f53dccc is in state STARTED 2025-09-03 01:00:48.570646 | orchestrator | 2025-09-03 01:00:48 | INFO  | Task 0565caac-e459-42a0-9d02-66460dafce6f is in state STARTED 2025-09-03 01:00:48.570666 | orchestrator | 2025-09-03 01:00:48 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:00:51.599724 | orchestrator | 2025-09-03 01:00:51 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:00:51.600178 | orchestrator | 2025-09-03 01:00:51 | INFO  | Task 2ce95322-778b-4dfc-ac97-427d82cda0df is in state STARTED 2025-09-03 01:00:51.601256 | orchestrator | 2025-09-03 01:00:51 | INFO  | Task 26b38d1d-0318-4371-9dc3-415f6f53dccc is in state STARTED 2025-09-03 01:00:51.601862 | orchestrator | 2025-09-03 01:00:51 | INFO  | Task 0565caac-e459-42a0-9d02-66460dafce6f is in state STARTED 2025-09-03 01:00:51.601884 | orchestrator | 2025-09-03 01:00:51 | INFO  | Wait 1 second(s) until the next check 2025-09-03 
01:00:54.629163 | orchestrator | 2025-09-03 01:00:54 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:00:54.631337 | orchestrator | 2025-09-03 01:00:54 | INFO  | Task 2ce95322-778b-4dfc-ac97-427d82cda0df is in state STARTED 2025-09-03 01:00:54.633271 | orchestrator | 2025-09-03 01:00:54 | INFO  | Task 26b38d1d-0318-4371-9dc3-415f6f53dccc is in state STARTED 2025-09-03 01:00:54.634877 | orchestrator | 2025-09-03 01:00:54 | INFO  | Task 0565caac-e459-42a0-9d02-66460dafce6f is in state STARTED 2025-09-03 01:00:54.635133 | orchestrator | 2025-09-03 01:00:54 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:00:57.664869 | orchestrator | 2025-09-03 01:00:57 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:00:57.665038 | orchestrator | 2025-09-03 01:00:57 | INFO  | Task 2ce95322-778b-4dfc-ac97-427d82cda0df is in state STARTED 2025-09-03 01:00:57.665464 | orchestrator | 2025-09-03 01:00:57 | INFO  | Task 26b38d1d-0318-4371-9dc3-415f6f53dccc is in state STARTED 2025-09-03 01:00:57.666219 | orchestrator | 2025-09-03 01:00:57 | INFO  | Task 0565caac-e459-42a0-9d02-66460dafce6f is in state STARTED 2025-09-03 01:00:57.666250 | orchestrator | 2025-09-03 01:00:57 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:01:00.706873 | orchestrator | 2025-09-03 01:01:00 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:01:00.707423 | orchestrator | 2025-09-03 01:01:00 | INFO  | Task 2ce95322-778b-4dfc-ac97-427d82cda0df is in state STARTED 2025-09-03 01:01:00.708230 | orchestrator | 2025-09-03 01:01:00 | INFO  | Task 26b38d1d-0318-4371-9dc3-415f6f53dccc is in state STARTED 2025-09-03 01:01:00.708989 | orchestrator | 2025-09-03 01:01:00 | INFO  | Task 0565caac-e459-42a0-9d02-66460dafce6f is in state STARTED 2025-09-03 01:01:00.709006 | orchestrator | 2025-09-03 01:01:00 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:01:03.735484 | orchestrator | 2025-09-03 01:01:03 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:01:03.735617 | orchestrator | 2025-09-03 01:01:03 | INFO  | Task 2ce95322-778b-4dfc-ac97-427d82cda0df is in state STARTED 2025-09-03 01:01:03.735921 | orchestrator | 2025-09-03 01:01:03 | INFO  | Task 26b38d1d-0318-4371-9dc3-415f6f53dccc is in state STARTED 2025-09-03 01:01:03.736554 | orchestrator | 2025-09-03 01:01:03 | INFO  | Task 0565caac-e459-42a0-9d02-66460dafce6f is in state STARTED 2025-09-03 01:01:03.736576 | orchestrator | 2025-09-03 01:01:03 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:01:06.771096 | orchestrator | 2025-09-03 01:01:06 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:01:06.771343 | orchestrator | 2025-09-03 01:01:06 | INFO  | Task 2ce95322-778b-4dfc-ac97-427d82cda0df is in state STARTED 2025-09-03 01:01:06.772089 | orchestrator | 2025-09-03 01:01:06 | INFO  | Task 26b38d1d-0318-4371-9dc3-415f6f53dccc is in state STARTED 2025-09-03 01:01:06.775478 | orchestrator | 2025-09-03 01:01:06 | INFO  | Task 0565caac-e459-42a0-9d02-66460dafce6f is in state STARTED 2025-09-03 01:01:06.775598 | orchestrator | 2025-09-03 01:01:06 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:01:09.796609 | orchestrator | 2025-09-03 01:01:09 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:01:09.796739 | orchestrator | 2025-09-03 01:01:09 | INFO  | Task 2ce95322-778b-4dfc-ac97-427d82cda0df is in state STARTED 2025-09-03 
01:01:09.797138 | orchestrator | 2025-09-03 01:01:09 | INFO  | Task 26b38d1d-0318-4371-9dc3-415f6f53dccc is in state STARTED 2025-09-03 01:01:09.797624 | orchestrator | 2025-09-03 01:01:09 | INFO  | Task 0565caac-e459-42a0-9d02-66460dafce6f is in state STARTED 2025-09-03 01:01:09.797645 | orchestrator | 2025-09-03 01:01:09 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:01:12.834433 | orchestrator | 2025-09-03 01:01:12 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:01:12.834659 | orchestrator | 2025-09-03 01:01:12 | INFO  | Task 2ce95322-778b-4dfc-ac97-427d82cda0df is in state STARTED 2025-09-03 01:01:12.835640 | orchestrator | 2025-09-03 01:01:12 | INFO  | Task 26b38d1d-0318-4371-9dc3-415f6f53dccc is in state STARTED 2025-09-03 01:01:12.838181 | orchestrator | 2025-09-03 01:01:12 | INFO  | Task 0565caac-e459-42a0-9d02-66460dafce6f is in state STARTED 2025-09-03 01:01:12.838210 | orchestrator | 2025-09-03 01:01:12 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:01:15.865435 | orchestrator | 2025-09-03 01:01:15 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:01:15.865580 | orchestrator | 2025-09-03 01:01:15 | INFO  | Task 2ce95322-778b-4dfc-ac97-427d82cda0df is in state STARTED 2025-09-03 01:01:15.866110 | orchestrator | 2025-09-03 01:01:15 | INFO  | Task 26b38d1d-0318-4371-9dc3-415f6f53dccc is in state STARTED 2025-09-03 01:01:15.866704 | orchestrator | 2025-09-03 01:01:15 | INFO  | Task 0565caac-e459-42a0-9d02-66460dafce6f is in state STARTED 2025-09-03 01:01:15.866726 | orchestrator | 2025-09-03 01:01:15 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:01:18.892802 | orchestrator | 2025-09-03 01:01:18 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:01:18.894305 | orchestrator | 2025-09-03 01:01:18 | INFO  | Task 2ce95322-778b-4dfc-ac97-427d82cda0df is in state STARTED 2025-09-03 01:01:18.894815 | orchestrator | 2025-09-03 01:01:18 | INFO  | Task 26b38d1d-0318-4371-9dc3-415f6f53dccc is in state STARTED 2025-09-03 01:01:18.895304 | orchestrator | 2025-09-03 01:01:18 | INFO  | Task 0565caac-e459-42a0-9d02-66460dafce6f is in state STARTED 2025-09-03 01:01:18.895328 | orchestrator | 2025-09-03 01:01:18 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:01:21.919597 | orchestrator | 2025-09-03 01:01:21 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:01:21.919711 | orchestrator | 2025-09-03 01:01:21 | INFO  | Task 2ce95322-778b-4dfc-ac97-427d82cda0df is in state STARTED 2025-09-03 01:01:21.920209 | orchestrator | 2025-09-03 01:01:21 | INFO  | Task 26b38d1d-0318-4371-9dc3-415f6f53dccc is in state STARTED 2025-09-03 01:01:21.920744 | orchestrator | 2025-09-03 01:01:21 | INFO  | Task 0565caac-e459-42a0-9d02-66460dafce6f is in state STARTED 2025-09-03 01:01:21.920769 | orchestrator | 2025-09-03 01:01:21 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:01:24.950504 | orchestrator | 2025-09-03 01:01:24 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:01:24.953415 | orchestrator | 2025-09-03 01:01:24 | INFO  | Task 2ce95322-778b-4dfc-ac97-427d82cda0df is in state STARTED 2025-09-03 01:01:24.953967 | orchestrator | 2025-09-03 01:01:24 | INFO  | Task 26b38d1d-0318-4371-9dc3-415f6f53dccc is in state STARTED 2025-09-03 01:01:24.954552 | orchestrator | 2025-09-03 01:01:24 | INFO  | Task 0565caac-e459-42a0-9d02-66460dafce6f is in state STARTED 2025-09-03 
01:01:24.954659 | orchestrator | 2025-09-03 01:01:24 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:01:27.993624 | orchestrator | 2025-09-03 01:01:27 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:01:27.993742 | orchestrator | 2025-09-03 01:01:27 | INFO  | Task 2ce95322-778b-4dfc-ac97-427d82cda0df is in state STARTED 2025-09-03 01:01:27.996137 | orchestrator | 2025-09-03 01:01:27 | INFO  | Task 26b38d1d-0318-4371-9dc3-415f6f53dccc is in state STARTED 2025-09-03 01:01:27.997915 | orchestrator | 2025-09-03 01:01:27 | INFO  | Task 0565caac-e459-42a0-9d02-66460dafce6f is in state STARTED 2025-09-03 01:01:27.997986 | orchestrator | 2025-09-03 01:01:27 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:01:31.022912 | orchestrator | 2025-09-03 01:01:31 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:01:31.023070 | orchestrator | 2025-09-03 01:01:31 | INFO  | Task 2ce95322-778b-4dfc-ac97-427d82cda0df is in state STARTED 2025-09-03 01:01:31.023394 | orchestrator | 2025-09-03 01:01:31 | INFO  | Task 26b38d1d-0318-4371-9dc3-415f6f53dccc is in state STARTED 2025-09-03 01:01:31.023910 | orchestrator | 2025-09-03 01:01:31 | INFO  | Task 0565caac-e459-42a0-9d02-66460dafce6f is in state STARTED 2025-09-03 01:01:31.024071 | orchestrator | 2025-09-03 01:01:31 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:01:34.043820 | orchestrator | 2025-09-03 01:01:34 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:01:34.043976 | orchestrator | 2025-09-03 01:01:34 | INFO  | Task 2ce95322-778b-4dfc-ac97-427d82cda0df is in state STARTED 2025-09-03 01:01:34.044644 | orchestrator | 2025-09-03 01:01:34 | INFO  | Task 26b38d1d-0318-4371-9dc3-415f6f53dccc is in state STARTED 2025-09-03 01:01:34.045055 | orchestrator | 2025-09-03 01:01:34 | INFO  | Task 0565caac-e459-42a0-9d02-66460dafce6f is in state STARTED 2025-09-03 01:01:34.045153 | orchestrator | 2025-09-03 01:01:34 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:01:37.070853 | orchestrator | 2025-09-03 01:01:37 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:01:37.071031 | orchestrator | 2025-09-03 01:01:37 | INFO  | Task 2ce95322-778b-4dfc-ac97-427d82cda0df is in state STARTED 2025-09-03 01:01:37.071334 | orchestrator | 2025-09-03 01:01:37 | INFO  | Task 26b38d1d-0318-4371-9dc3-415f6f53dccc is in state STARTED 2025-09-03 01:01:37.072009 | orchestrator | 2025-09-03 01:01:37 | INFO  | Task 0565caac-e459-42a0-9d02-66460dafce6f is in state STARTED 2025-09-03 01:01:37.072032 | orchestrator | 2025-09-03 01:01:37 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:01:40.095294 | orchestrator | 2025-09-03 01:01:40 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:01:40.096775 | orchestrator | 2025-09-03 01:01:40 | INFO  | Task 2ce95322-778b-4dfc-ac97-427d82cda0df is in state STARTED 2025-09-03 01:01:40.097234 | orchestrator | 2025-09-03 01:01:40 | INFO  | Task 26b38d1d-0318-4371-9dc3-415f6f53dccc is in state STARTED 2025-09-03 01:01:40.097850 | orchestrator | 2025-09-03 01:01:40 | INFO  | Task 0565caac-e459-42a0-9d02-66460dafce6f is in state STARTED 2025-09-03 01:01:40.097973 | orchestrator | 2025-09-03 01:01:40 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:01:43.123559 | orchestrator | 2025-09-03 01:01:43 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:01:43.124010 | 
orchestrator | 2025-09-03 01:01:43 | INFO  | Task 2ce95322-778b-4dfc-ac97-427d82cda0df is in state STARTED 2025-09-03 01:01:43.126280 | orchestrator | 2025-09-03 01:01:43 | INFO  | Task 26b38d1d-0318-4371-9dc3-415f6f53dccc is in state STARTED 2025-09-03 01:01:43.128137 | orchestrator | 2025-09-03 01:01:43 | INFO  | Task 0565caac-e459-42a0-9d02-66460dafce6f is in state STARTED 2025-09-03 01:01:43.128281 | orchestrator | 2025-09-03 01:01:43 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:01:46.164633 | orchestrator | 2025-09-03 01:01:46 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:01:46.165396 | orchestrator | 2025-09-03 01:01:46 | INFO  | Task 2ce95322-778b-4dfc-ac97-427d82cda0df is in state STARTED 2025-09-03 01:01:46.166896 | orchestrator | 2025-09-03 01:01:46 | INFO  | Task 26b38d1d-0318-4371-9dc3-415f6f53dccc is in state STARTED 2025-09-03 01:01:46.168010 | orchestrator | 2025-09-03 01:01:46 | INFO  | Task 0565caac-e459-42a0-9d02-66460dafce6f is in state STARTED 2025-09-03 01:01:46.168294 | orchestrator | 2025-09-03 01:01:46 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:01:49.195410 | orchestrator | 2025-09-03 01:01:49 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:01:49.195716 | orchestrator | 2025-09-03 01:01:49 | INFO  | Task 2ce95322-778b-4dfc-ac97-427d82cda0df is in state STARTED 2025-09-03 01:01:49.197148 | orchestrator | 2025-09-03 01:01:49 | INFO  | Task 26b38d1d-0318-4371-9dc3-415f6f53dccc is in state STARTED 2025-09-03 01:01:49.198236 | orchestrator | 2025-09-03 01:01:49 | INFO  | Task 0565caac-e459-42a0-9d02-66460dafce6f is in state STARTED 2025-09-03 01:01:49.198262 | orchestrator | 2025-09-03 01:01:49 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:01:52.224425 | orchestrator | 2025-09-03 01:01:52 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:01:52.224875 | orchestrator | 2025-09-03 01:01:52 | INFO  | Task 2ce95322-778b-4dfc-ac97-427d82cda0df is in state STARTED 2025-09-03 01:01:52.225622 | orchestrator | 2025-09-03 01:01:52 | INFO  | Task 26b38d1d-0318-4371-9dc3-415f6f53dccc is in state STARTED 2025-09-03 01:01:52.226475 | orchestrator | 2025-09-03 01:01:52 | INFO  | Task 0565caac-e459-42a0-9d02-66460dafce6f is in state STARTED 2025-09-03 01:01:52.226487 | orchestrator | 2025-09-03 01:01:52 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:01:55.259101 | orchestrator | 2025-09-03 01:01:55 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:01:55.259465 | orchestrator | 2025-09-03 01:01:55 | INFO  | Task 2ce95322-778b-4dfc-ac97-427d82cda0df is in state STARTED 2025-09-03 01:01:55.260531 | orchestrator | 2025-09-03 01:01:55 | INFO  | Task 26b38d1d-0318-4371-9dc3-415f6f53dccc is in state STARTED 2025-09-03 01:01:55.261076 | orchestrator | 2025-09-03 01:01:55 | INFO  | Task 0565caac-e459-42a0-9d02-66460dafce6f is in state STARTED 2025-09-03 01:01:55.261098 | orchestrator | 2025-09-03 01:01:55 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:01:58.303358 | orchestrator | 2025-09-03 01:01:58 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:01:58.304076 | orchestrator | 2025-09-03 01:01:58 | INFO  | Task 2ce95322-778b-4dfc-ac97-427d82cda0df is in state STARTED 2025-09-03 01:01:58.308214 | orchestrator | 2025-09-03 01:01:58 | INFO  | Task 26b38d1d-0318-4371-9dc3-415f6f53dccc is in state STARTED 2025-09-03 01:01:58.309498 | 
orchestrator | 2025-09-03 01:01:58 | INFO  | Task 0565caac-e459-42a0-9d02-66460dafce6f is in state STARTED 2025-09-03 01:01:58.309526 | orchestrator | 2025-09-03 01:01:58 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:02:01.342501 | orchestrator | 2025-09-03 01:02:01 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:02:01.343174 | orchestrator | 2025-09-03 01:02:01 | INFO  | Task 2ce95322-778b-4dfc-ac97-427d82cda0df is in state STARTED 2025-09-03 01:02:01.344109 | orchestrator | 2025-09-03 01:02:01 | INFO  | Task 26b38d1d-0318-4371-9dc3-415f6f53dccc is in state STARTED 2025-09-03 01:02:01.344907 | orchestrator | 2025-09-03 01:02:01 | INFO  | Task 1b91eb34-3356-465e-aa02-fca48e43b3fc is in state STARTED 2025-09-03 01:02:01.346437 | orchestrator | 2025-09-03 01:02:01 | INFO  | Task 0565caac-e459-42a0-9d02-66460dafce6f is in state SUCCESS 2025-09-03 01:02:01.346618 | orchestrator | 2025-09-03 01:02:01 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:02:01.348361 | orchestrator | 2025-09-03 01:02:01.348394 | orchestrator | 2025-09-03 01:02:01.348407 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-03 01:02:01.348420 | orchestrator | 2025-09-03 01:02:01.348432 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-03 01:02:01.348444 | orchestrator | Wednesday 03 September 2025 01:00:02 +0000 (0:00:00.250) 0:00:00.250 *** 2025-09-03 01:02:01.348457 | orchestrator | ok: [testbed-node-0] 2025-09-03 01:02:01.348473 | orchestrator | ok: [testbed-node-1] 2025-09-03 01:02:01.348485 | orchestrator | ok: [testbed-node-2] 2025-09-03 01:02:01.348775 | orchestrator | 2025-09-03 01:02:01.348819 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-03 01:02:01.348832 | orchestrator | Wednesday 03 September 2025 01:00:03 +0000 (0:00:00.275) 0:00:00.525 *** 2025-09-03 01:02:01.348844 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2025-09-03 01:02:01.348856 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2025-09-03 01:02:01.348867 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2025-09-03 01:02:01.348878 | orchestrator | 2025-09-03 01:02:01.348890 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2025-09-03 01:02:01.348901 | orchestrator | 2025-09-03 01:02:01.348913 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-09-03 01:02:01.348947 | orchestrator | Wednesday 03 September 2025 01:00:03 +0000 (0:00:00.377) 0:00:00.902 *** 2025-09-03 01:02:01.348959 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 01:02:01.348971 | orchestrator | 2025-09-03 01:02:01.348983 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2025-09-03 01:02:01.348994 | orchestrator | Wednesday 03 September 2025 01:00:04 +0000 (0:00:00.505) 0:00:01.408 *** 2025-09-03 01:02:01.349006 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2025-09-03 01:02:01.349017 | orchestrator | 2025-09-03 01:02:01.349029 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2025-09-03 01:02:01.349040 | orchestrator | Wednesday 03 September 2025 01:00:07 +0000 (0:00:03.306) 0:00:04.714 *** 
2025-09-03 01:02:01.349051 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2025-09-03 01:02:01.349063 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2025-09-03 01:02:01.349074 | orchestrator | 2025-09-03 01:02:01.349085 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2025-09-03 01:02:01.349096 | orchestrator | Wednesday 03 September 2025 01:00:13 +0000 (0:00:06.149) 0:00:10.864 *** 2025-09-03 01:02:01.349108 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-03 01:02:01.349119 | orchestrator | 2025-09-03 01:02:01.349131 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2025-09-03 01:02:01.349142 | orchestrator | Wednesday 03 September 2025 01:00:16 +0000 (0:00:03.048) 0:00:13.913 *** 2025-09-03 01:02:01.349154 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-03 01:02:01.349165 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2025-09-03 01:02:01.349176 | orchestrator | 2025-09-03 01:02:01.349188 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2025-09-03 01:02:01.349199 | orchestrator | Wednesday 03 September 2025 01:00:20 +0000 (0:00:03.930) 0:00:17.843 *** 2025-09-03 01:02:01.349210 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-03 01:02:01.349222 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2025-09-03 01:02:01.349233 | orchestrator | changed: [testbed-node-0] => (item=creator) 2025-09-03 01:02:01.349245 | orchestrator | changed: [testbed-node-0] => (item=observer) 2025-09-03 01:02:01.349256 | orchestrator | changed: [testbed-node-0] => (item=audit) 2025-09-03 01:02:01.349267 | orchestrator | 2025-09-03 01:02:01.349279 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2025-09-03 01:02:01.349290 | orchestrator | Wednesday 03 September 2025 01:00:35 +0000 (0:00:15.240) 0:00:33.083 *** 2025-09-03 01:02:01.349330 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2025-09-03 01:02:01.349342 | orchestrator | 2025-09-03 01:02:01.349353 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2025-09-03 01:02:01.349364 | orchestrator | Wednesday 03 September 2025 01:00:40 +0000 (0:00:04.236) 0:00:37.320 *** 2025-09-03 01:02:01.349391 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-03 01:02:01.349433 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-03 01:02:01.349449 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-03 01:02:01.349464 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-03 01:02:01.349479 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-03 01:02:01.349506 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-03 01:02:01.349532 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-03 01:02:01.349549 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-03 01:02:01.349564 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-03 01:02:01.349577 | orchestrator | 2025-09-03 01:02:01.349590 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2025-09-03 01:02:01.349604 | orchestrator | Wednesday 03 September 2025 01:00:42 +0000 (0:00:02.090) 0:00:39.410 *** 2025-09-03 01:02:01.349618 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2025-09-03 01:02:01.349632 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2025-09-03 01:02:01.349645 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2025-09-03 01:02:01.349658 | orchestrator | 2025-09-03 01:02:01.349671 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2025-09-03 01:02:01.349700 | orchestrator | Wednesday 03 September 2025 01:00:43 +0000 (0:00:01.782) 0:00:41.192 *** 2025-09-03 01:02:01.349725 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:02:01.349740 | orchestrator | 2025-09-03 01:02:01.349753 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2025-09-03 01:02:01.349764 | orchestrator | Wednesday 03 September 2025 01:00:44 +0000 (0:00:00.128) 0:00:41.321 *** 2025-09-03 01:02:01.349776 | 
orchestrator | skipping: [testbed-node-0] 2025-09-03 01:02:01.349787 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:02:01.349798 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:02:01.349809 | orchestrator | 2025-09-03 01:02:01.349821 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-09-03 01:02:01.349832 | orchestrator | Wednesday 03 September 2025 01:00:44 +0000 (0:00:00.709) 0:00:42.031 *** 2025-09-03 01:02:01.349843 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 01:02:01.349861 | orchestrator | 2025-09-03 01:02:01.349873 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2025-09-03 01:02:01.349884 | orchestrator | Wednesday 03 September 2025 01:00:45 +0000 (0:00:00.491) 0:00:42.522 *** 2025-09-03 01:02:01.349895 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-03 01:02:01.349949 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-03 01:02:01.349963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-03 01:02:01.349975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-03 01:02:01.349988 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-03 01:02:01.350087 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-03 01:02:01.350124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-03 01:02:01.350156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-03 01:02:01.350177 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-03 01:02:01.350196 | orchestrator | 2025-09-03 01:02:01.350216 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-09-03 01:02:01.350234 | orchestrator | Wednesday 03 September 2025 01:00:48 +0000 (0:00:03.706) 0:00:46.229 *** 2025-09-03 01:02:01.350253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-03 01:02:01.350284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-03 01:02:01.350297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-03 01:02:01.350309 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:02:01.350335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-03 01:02:01.350347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-03 01:02:01.350359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-03 01:02:01.350371 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:02:01.350383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-03 01:02:01.350404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-03 01:02:01.350416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-03 01:02:01.350427 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:02:01.350439 | orchestrator | 2025-09-03 01:02:01.350455 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-09-03 01:02:01.350466 | orchestrator | Wednesday 03 September 2025 01:00:49 +0000 (0:00:00.696) 0:00:46.925 *** 2025-09-03 01:02:01.350485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-03 01:02:01.350497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-03 01:02:01.350509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-03 01:02:01.350528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-03 01:02:01.350540 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:02:01.350552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-03 01:02:01.350568 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-03 01:02:01.350580 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:02:01.350600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-03 01:02:01.350612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-03 01:02:01.350630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-03 01:02:01.350642 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:02:01.350653 | orchestrator | 2025-09-03 01:02:01.350664 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-09-03 01:02:01.350676 | orchestrator | Wednesday 03 September 2025 01:00:51 +0000 (0:00:01.355) 0:00:48.281 *** 2025-09-03 01:02:01.350687 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-03 01:02:01.350709 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-03 01:02:01.350722 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-03 01:02:01.350740 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-03 01:02:01.350752 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-03 01:02:01.350763 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-03 01:02:01.350775 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-03 01:02:01.350797 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 
'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-03 01:02:01.350809 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-03 01:02:01.350821 | orchestrator | 2025-09-03 01:02:01.350832 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-09-03 01:02:01.350849 | orchestrator | Wednesday 03 September 2025 01:00:54 +0000 (0:00:03.824) 0:00:52.105 *** 2025-09-03 01:02:01.350861 | orchestrator | changed: [testbed-node-0] 2025-09-03 01:02:01.350872 | orchestrator | changed: [testbed-node-1] 2025-09-03 01:02:01.350883 | orchestrator | changed: [testbed-node-2] 2025-09-03 01:02:01.350894 | orchestrator | 2025-09-03 01:02:01.350905 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-09-03 01:02:01.350916 | orchestrator | Wednesday 03 September 2025 01:00:57 +0000 (0:00:02.823) 0:00:54.931 *** 2025-09-03 01:02:01.351070 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-03 01:02:01.351089 | orchestrator | 2025-09-03 01:02:01.351101 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-09-03 01:02:01.351112 | orchestrator | Wednesday 03 September 2025 01:00:59 +0000 (0:00:01.472) 0:00:56.403 *** 2025-09-03 01:02:01.351123 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:02:01.351134 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:02:01.351145 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:02:01.351156 | orchestrator | 2025-09-03 01:02:01.351167 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-09-03 01:02:01.351178 | orchestrator | Wednesday 03 September 2025 01:00:59 +0000 (0:00:00.612) 0:00:57.016 *** 2025-09-03 01:02:01.351190 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-03 01:02:01.351221 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-03 01:02:01.351264 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-03 01:02:01.351291 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-03 01:02:01.351303 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-03 01:02:01.351314 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-03 01:02:01.351326 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-03 01:02:01.351337 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-03 01:02:01.351353 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-03 01:02:01.351365 | orchestrator | 2025-09-03 01:02:01.351376 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-09-03 01:02:01.351394 | orchestrator | Wednesday 03 September 2025 01:01:07 +0000 (0:00:07.295) 0:01:04.312 *** 2025-09-03 01:02:01.351413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-03 01:02:01.351425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-03 01:02:01.351437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-03 01:02:01.351449 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:02:01.351460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-03 01:02:01.351480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-03 01:02:01.351503 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-03 01:02:01.351514 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:02:01.351525 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-03 01:02:01.351535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-03 01:02:01.351546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-03 01:02:01.351556 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:02:01.351566 | orchestrator | 2025-09-03 01:02:01.351576 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-09-03 01:02:01.351586 | orchestrator | Wednesday 03 September 2025 01:01:07 +0000 (0:00:00.653) 0:01:04.965 *** 2025-09-03 01:02:01.351600 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-03 01:02:01.351623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-03 01:02:01.351634 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-03 01:02:01.351644 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-03 01:02:01.351655 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 
'timeout': '30'}}}) 2025-09-03 01:02:01.351665 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-03 01:02:01.351680 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-03 01:02:01.351704 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-03 01:02:01.351715 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-03 01:02:01.351725 | orchestrator | 2025-09-03 01:02:01.351736 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-09-03 01:02:01.351745 | orchestrator | Wednesday 03 September 2025 01:01:10 +0000 (0:00:02.579) 0:01:07.545 *** 2025-09-03 01:02:01.351755 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:02:01.351765 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:02:01.351775 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:02:01.351785 | orchestrator | 2025-09-03 01:02:01.351795 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2025-09-03 01:02:01.351805 | orchestrator | Wednesday 03 September 2025 01:01:10 +0000 (0:00:00.268) 0:01:07.814 *** 2025-09-03 01:02:01.351814 | orchestrator | changed: [testbed-node-0] 2025-09-03 01:02:01.351824 | orchestrator | 2025-09-03 01:02:01.351835 | orchestrator | TASK [barbican : Creating barbican database user and setting 
permissions] ****** 2025-09-03 01:02:01.351845 | orchestrator | Wednesday 03 September 2025 01:01:12 +0000 (0:00:02.251) 0:01:10.066 *** 2025-09-03 01:02:01.351854 | orchestrator | changed: [testbed-node-0] 2025-09-03 01:02:01.351864 | orchestrator | 2025-09-03 01:02:01.351874 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2025-09-03 01:02:01.351884 | orchestrator | Wednesday 03 September 2025 01:01:14 +0000 (0:00:02.113) 0:01:12.180 *** 2025-09-03 01:02:01.351894 | orchestrator | changed: [testbed-node-0] 2025-09-03 01:02:01.351904 | orchestrator | 2025-09-03 01:02:01.351914 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-09-03 01:02:01.351941 | orchestrator | Wednesday 03 September 2025 01:01:26 +0000 (0:00:11.109) 0:01:23.289 *** 2025-09-03 01:02:01.351951 | orchestrator | 2025-09-03 01:02:01.351961 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-09-03 01:02:01.351971 | orchestrator | Wednesday 03 September 2025 01:01:26 +0000 (0:00:00.130) 0:01:23.419 *** 2025-09-03 01:02:01.351981 | orchestrator | 2025-09-03 01:02:01.351990 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-09-03 01:02:01.352000 | orchestrator | Wednesday 03 September 2025 01:01:26 +0000 (0:00:00.123) 0:01:23.543 *** 2025-09-03 01:02:01.352010 | orchestrator | 2025-09-03 01:02:01.352019 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2025-09-03 01:02:01.352036 | orchestrator | Wednesday 03 September 2025 01:01:26 +0000 (0:00:00.098) 0:01:23.642 *** 2025-09-03 01:02:01.352046 | orchestrator | changed: [testbed-node-0] 2025-09-03 01:02:01.352055 | orchestrator | changed: [testbed-node-2] 2025-09-03 01:02:01.352065 | orchestrator | changed: [testbed-node-1] 2025-09-03 01:02:01.352075 | orchestrator | 2025-09-03 01:02:01.352085 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2025-09-03 01:02:01.352094 | orchestrator | Wednesday 03 September 2025 01:01:37 +0000 (0:00:11.059) 0:01:34.701 *** 2025-09-03 01:02:01.352104 | orchestrator | changed: [testbed-node-0] 2025-09-03 01:02:01.352114 | orchestrator | changed: [testbed-node-1] 2025-09-03 01:02:01.352124 | orchestrator | changed: [testbed-node-2] 2025-09-03 01:02:01.352133 | orchestrator | 2025-09-03 01:02:01.352143 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2025-09-03 01:02:01.352153 | orchestrator | Wednesday 03 September 2025 01:01:48 +0000 (0:00:11.086) 0:01:45.788 *** 2025-09-03 01:02:01.352162 | orchestrator | changed: [testbed-node-0] 2025-09-03 01:02:01.352172 | orchestrator | changed: [testbed-node-2] 2025-09-03 01:02:01.352182 | orchestrator | changed: [testbed-node-1] 2025-09-03 01:02:01.352192 | orchestrator | 2025-09-03 01:02:01.352202 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-03 01:02:01.352213 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-03 01:02:01.352223 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-03 01:02:01.352237 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-03 01:02:01.352247 | orchestrator | 2025-09-03 01:02:01.352257 | 
orchestrator | 2025-09-03 01:02:01.352267 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-03 01:02:01.352277 | orchestrator | Wednesday 03 September 2025 01:01:59 +0000 (0:00:10.639) 0:01:56.427 *** 2025-09-03 01:02:01.352286 | orchestrator | =============================================================================== 2025-09-03 01:02:01.352296 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 15.24s 2025-09-03 01:02:01.352311 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 11.11s 2025-09-03 01:02:01.352322 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 11.09s 2025-09-03 01:02:01.352332 | orchestrator | barbican : Restart barbican-api container ------------------------------ 11.06s 2025-09-03 01:02:01.352342 | orchestrator | barbican : Restart barbican-worker container --------------------------- 10.64s 2025-09-03 01:02:01.352351 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 7.30s 2025-09-03 01:02:01.352361 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.15s 2025-09-03 01:02:01.352371 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.24s 2025-09-03 01:02:01.352381 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.93s 2025-09-03 01:02:01.352390 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.82s 2025-09-03 01:02:01.352400 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.71s 2025-09-03 01:02:01.352410 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.31s 2025-09-03 01:02:01.352419 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.05s 2025-09-03 01:02:01.352429 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.83s 2025-09-03 01:02:01.352439 | orchestrator | barbican : Check barbican containers ------------------------------------ 2.58s 2025-09-03 01:02:01.352449 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.25s 2025-09-03 01:02:01.352458 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.11s 2025-09-03 01:02:01.352474 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.09s 2025-09-03 01:02:01.352484 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 1.78s 2025-09-03 01:02:01.352494 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 1.47s 2025-09-03 01:02:04.391394 | orchestrator | 2025-09-03 01:02:04 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:02:04.392109 | orchestrator | 2025-09-03 01:02:04 | INFO  | Task 2ce95322-778b-4dfc-ac97-427d82cda0df is in state STARTED 2025-09-03 01:02:04.393571 | orchestrator | 2025-09-03 01:02:04 | INFO  | Task 26b38d1d-0318-4371-9dc3-415f6f53dccc is in state STARTED 2025-09-03 01:02:04.394564 | orchestrator | 2025-09-03 01:02:04 | INFO  | Task 1b91eb34-3356-465e-aa02-fca48e43b3fc is in state STARTED 2025-09-03 01:02:04.396207 | orchestrator | 2025-09-03 01:02:04 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:02:07.442647 | 
orchestrator | 2025-09-03 01:02:07 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:02:07.443029 | orchestrator | 2025-09-03 01:02:07 | INFO  | Task 2ce95322-778b-4dfc-ac97-427d82cda0df is in state STARTED 2025-09-03 01:02:07.443438 | orchestrator | 2025-09-03 01:02:07 | INFO  | Task 26b38d1d-0318-4371-9dc3-415f6f53dccc is in state STARTED 2025-09-03 01:02:07.444252 | orchestrator | 2025-09-03 01:02:07 | INFO  | Task 1b91eb34-3356-465e-aa02-fca48e43b3fc is in state STARTED 2025-09-03 01:02:07.444276 | orchestrator | 2025-09-03 01:02:07 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:02:10.485879 | orchestrator | 2025-09-03 01:02:10 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:02:10.488606 | orchestrator | 2025-09-03 01:02:10 | INFO  | Task 2ce95322-778b-4dfc-ac97-427d82cda0df is in state STARTED 2025-09-03 01:02:10.492035 | orchestrator | 2025-09-03 01:02:10 | INFO  | Task 26b38d1d-0318-4371-9dc3-415f6f53dccc is in state STARTED 2025-09-03 01:02:10.494874 | orchestrator | 2025-09-03 01:02:10 | INFO  | Task 1b91eb34-3356-465e-aa02-fca48e43b3fc is in state STARTED 2025-09-03 01:02:10.495348 | orchestrator | 2025-09-03 01:02:10 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:02:13.542135 | orchestrator | 2025-09-03 01:02:13 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:02:13.542750 | orchestrator | 2025-09-03 01:02:13 | INFO  | Task 2ce95322-778b-4dfc-ac97-427d82cda0df is in state STARTED 2025-09-03 01:02:13.544290 | orchestrator | 2025-09-03 01:02:13 | INFO  | Task 26b38d1d-0318-4371-9dc3-415f6f53dccc is in state STARTED 2025-09-03 01:02:13.545462 | orchestrator | 2025-09-03 01:02:13 | INFO  | Task 1b91eb34-3356-465e-aa02-fca48e43b3fc is in state STARTED 2025-09-03 01:02:13.545684 | orchestrator | 2025-09-03 01:02:13 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:02:16.578805 | orchestrator | 2025-09-03 01:02:16 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:02:16.579298 | orchestrator | 2025-09-03 01:02:16 | INFO  | Task 2ce95322-778b-4dfc-ac97-427d82cda0df is in state STARTED 2025-09-03 01:02:16.580349 | orchestrator | 2025-09-03 01:02:16 | INFO  | Task 26b38d1d-0318-4371-9dc3-415f6f53dccc is in state STARTED 2025-09-03 01:02:16.581108 | orchestrator | 2025-09-03 01:02:16 | INFO  | Task 1b91eb34-3356-465e-aa02-fca48e43b3fc is in state STARTED 2025-09-03 01:02:16.581290 | orchestrator | 2025-09-03 01:02:16 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:02:19.623127 | orchestrator | 2025-09-03 01:02:19 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:02:19.626199 | orchestrator | 2025-09-03 01:02:19 | INFO  | Task 2ce95322-778b-4dfc-ac97-427d82cda0df is in state STARTED 2025-09-03 01:02:19.627538 | orchestrator | 2025-09-03 01:02:19 | INFO  | Task 26b38d1d-0318-4371-9dc3-415f6f53dccc is in state STARTED 2025-09-03 01:02:19.629383 | orchestrator | 2025-09-03 01:02:19 | INFO  | Task 1b91eb34-3356-465e-aa02-fca48e43b3fc is in state STARTED 2025-09-03 01:02:19.629585 | orchestrator | 2025-09-03 01:02:19 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:02:22.674687 | orchestrator | 2025-09-03 01:02:22 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:02:22.676744 | orchestrator | 2025-09-03 01:02:22 | INFO  | Task 2ce95322-778b-4dfc-ac97-427d82cda0df is in state STARTED 2025-09-03 01:02:22.679028 | 
orchestrator | 2025-09-03 01:02:22 | INFO  | Task 26b38d1d-0318-4371-9dc3-415f6f53dccc is in state STARTED 2025-09-03 01:02:22.680686 | orchestrator | 2025-09-03 01:02:22 | INFO  | Task 1b91eb34-3356-465e-aa02-fca48e43b3fc is in state STARTED 2025-09-03 01:02:22.681306 | orchestrator | 2025-09-03 01:02:22 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:02:25.715686 | orchestrator | 2025-09-03 01:02:25 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:02:25.717335 | orchestrator | 2025-09-03 01:02:25 | INFO  | Task 2ce95322-778b-4dfc-ac97-427d82cda0df is in state STARTED 2025-09-03 01:02:25.719400 | orchestrator | 2025-09-03 01:02:25 | INFO  | Task 26b38d1d-0318-4371-9dc3-415f6f53dccc is in state STARTED 2025-09-03 01:02:25.721594 | orchestrator | 2025-09-03 01:02:25 | INFO  | Task 1b91eb34-3356-465e-aa02-fca48e43b3fc is in state STARTED 2025-09-03 01:02:25.721618 | orchestrator | 2025-09-03 01:02:25 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:02:28.759146 | orchestrator | 2025-09-03 01:02:28 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:02:28.760067 | orchestrator | 2025-09-03 01:02:28 | INFO  | Task 2ce95322-778b-4dfc-ac97-427d82cda0df is in state STARTED 2025-09-03 01:02:28.761186 | orchestrator | 2025-09-03 01:02:28 | INFO  | Task 26b38d1d-0318-4371-9dc3-415f6f53dccc is in state STARTED 2025-09-03 01:02:28.763450 | orchestrator | 2025-09-03 01:02:28 | INFO  | Task 1b91eb34-3356-465e-aa02-fca48e43b3fc is in state STARTED 2025-09-03 01:02:28.763484 | orchestrator | 2025-09-03 01:02:28 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:02:31.863254 | orchestrator | 2025-09-03 01:02:31 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:02:31.863451 | orchestrator | 2025-09-03 01:02:31 | INFO  | Task 2ce95322-778b-4dfc-ac97-427d82cda0df is in state STARTED 2025-09-03 01:02:31.863484 | orchestrator | 2025-09-03 01:02:31 | INFO  | Task 26b38d1d-0318-4371-9dc3-415f6f53dccc is in state STARTED 2025-09-03 01:02:31.863998 | orchestrator | 2025-09-03 01:02:31 | INFO  | Task 1b91eb34-3356-465e-aa02-fca48e43b3fc is in state STARTED 2025-09-03 01:02:31.864022 | orchestrator | 2025-09-03 01:02:31 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:02:34.885197 | orchestrator | 2025-09-03 01:02:34 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:02:34.885305 | orchestrator | 2025-09-03 01:02:34 | INFO  | Task 2ce95322-778b-4dfc-ac97-427d82cda0df is in state STARTED 2025-09-03 01:02:34.885421 | orchestrator | 2025-09-03 01:02:34 | INFO  | Task 26b38d1d-0318-4371-9dc3-415f6f53dccc is in state STARTED 2025-09-03 01:02:34.886761 | orchestrator | 2025-09-03 01:02:34 | INFO  | Task 1b91eb34-3356-465e-aa02-fca48e43b3fc is in state STARTED 2025-09-03 01:02:34.886819 | orchestrator | 2025-09-03 01:02:34 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:02:37.916591 | orchestrator | 2025-09-03 01:02:37 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:02:37.921161 | orchestrator | 2025-09-03 01:02:37 | INFO  | Task 2ce95322-778b-4dfc-ac97-427d82cda0df is in state STARTED 2025-09-03 01:02:37.921626 | orchestrator | 2025-09-03 01:02:37 | INFO  | Task 26b38d1d-0318-4371-9dc3-415f6f53dccc is in state STARTED 2025-09-03 01:02:37.922323 | orchestrator | 2025-09-03 01:02:37 | INFO  | Task 1b91eb34-3356-465e-aa02-fca48e43b3fc is in state STARTED 2025-09-03 01:02:37.922348 | 
orchestrator | 2025-09-03 01:02:37 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:02:40.947415 | orchestrator | 2025-09-03 01:02:40 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:02:40.949333 | orchestrator | 2025-09-03 01:02:40 | INFO  | Task 2ce95322-778b-4dfc-ac97-427d82cda0df is in state STARTED 2025-09-03 01:02:40.949660 | orchestrator | 2025-09-03 01:02:40 | INFO  | Task 26b38d1d-0318-4371-9dc3-415f6f53dccc is in state STARTED 2025-09-03 01:02:40.950313 | orchestrator | 2025-09-03 01:02:40 | INFO  | Task 1b91eb34-3356-465e-aa02-fca48e43b3fc is in state STARTED 2025-09-03 01:02:40.950341 | orchestrator | 2025-09-03 01:02:40 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:02:43.996542 | orchestrator | 2025-09-03 01:02:43 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:02:43.996746 | orchestrator | 2025-09-03 01:02:43 | INFO  | Task 2ce95322-778b-4dfc-ac97-427d82cda0df is in state STARTED 2025-09-03 01:02:43.996779 | orchestrator | 2025-09-03 01:02:43 | INFO  | Task 26b38d1d-0318-4371-9dc3-415f6f53dccc is in state STARTED 2025-09-03 01:02:43.998288 | orchestrator | 2025-09-03 01:02:43 | INFO  | Task 1b91eb34-3356-465e-aa02-fca48e43b3fc is in state STARTED 2025-09-03 01:02:43.998314 | orchestrator | 2025-09-03 01:02:43 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:02:47.050734 | orchestrator | 2025-09-03 01:02:47 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:02:47.053701 | orchestrator | 2025-09-03 01:02:47 | INFO  | Task 2ce95322-778b-4dfc-ac97-427d82cda0df is in state STARTED 2025-09-03 01:02:47.055395 | orchestrator | 2025-09-03 01:02:47 | INFO  | Task 26b38d1d-0318-4371-9dc3-415f6f53dccc is in state STARTED 2025-09-03 01:02:47.058196 | orchestrator | 2025-09-03 01:02:47 | INFO  | Task 1b91eb34-3356-465e-aa02-fca48e43b3fc is in state STARTED 2025-09-03 01:02:47.058223 | orchestrator | 2025-09-03 01:02:47 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:02:50.086954 | orchestrator | 2025-09-03 01:02:50 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:02:50.087531 | orchestrator | 2025-09-03 01:02:50 | INFO  | Task 2ce95322-778b-4dfc-ac97-427d82cda0df is in state STARTED 2025-09-03 01:02:50.089268 | orchestrator | 2025-09-03 01:02:50 | INFO  | Task 26b38d1d-0318-4371-9dc3-415f6f53dccc is in state STARTED 2025-09-03 01:02:50.090308 | orchestrator | 2025-09-03 01:02:50 | INFO  | Task 1b91eb34-3356-465e-aa02-fca48e43b3fc is in state STARTED 2025-09-03 01:02:50.090406 | orchestrator | 2025-09-03 01:02:50 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:02:53.130717 | orchestrator | 2025-09-03 01:02:53 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:02:53.132081 | orchestrator | 2025-09-03 01:02:53 | INFO  | Task 2ce95322-778b-4dfc-ac97-427d82cda0df is in state STARTED 2025-09-03 01:02:53.133352 | orchestrator | 2025-09-03 01:02:53 | INFO  | Task 26b38d1d-0318-4371-9dc3-415f6f53dccc is in state STARTED 2025-09-03 01:02:53.134851 | orchestrator | 2025-09-03 01:02:53 | INFO  | Task 1b91eb34-3356-465e-aa02-fca48e43b3fc is in state STARTED 2025-09-03 01:02:53.134877 | orchestrator | 2025-09-03 01:02:53 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:02:56.203409 | orchestrator | 2025-09-03 01:02:56 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:02:56.205436 | orchestrator | 2025-09-03 
01:02:56 | INFO  | Task 2ce95322-778b-4dfc-ac97-427d82cda0df is in state STARTED 2025-09-03 01:02:56.206991 | orchestrator | 2025-09-03 01:02:56 | INFO  | Task 26b38d1d-0318-4371-9dc3-415f6f53dccc is in state STARTED 2025-09-03 01:02:56.208535 | orchestrator | 2025-09-03 01:02:56 | INFO  | Task 1b91eb34-3356-465e-aa02-fca48e43b3fc is in state STARTED 2025-09-03 01:02:56.208842 | orchestrator | 2025-09-03 01:02:56 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:02:59.259109 | orchestrator | 2025-09-03 01:02:59 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:02:59.259541 | orchestrator | 2025-09-03 01:02:59 | INFO  | Task 2ce95322-778b-4dfc-ac97-427d82cda0df is in state STARTED 2025-09-03 01:02:59.260461 | orchestrator | 2025-09-03 01:02:59 | INFO  | Task 26b38d1d-0318-4371-9dc3-415f6f53dccc is in state STARTED 2025-09-03 01:02:59.261162 | orchestrator | 2025-09-03 01:02:59 | INFO  | Task 1b91eb34-3356-465e-aa02-fca48e43b3fc is in state STARTED 2025-09-03 01:02:59.261263 | orchestrator | 2025-09-03 01:02:59 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:03:02.287433 | orchestrator | 2025-09-03 01:03:02 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:03:02.287558 | orchestrator | 2025-09-03 01:03:02 | INFO  | Task 2ce95322-778b-4dfc-ac97-427d82cda0df is in state STARTED 2025-09-03 01:03:02.287897 | orchestrator | 2025-09-03 01:03:02 | INFO  | Task 26b38d1d-0318-4371-9dc3-415f6f53dccc is in state STARTED 2025-09-03 01:03:02.288576 | orchestrator | 2025-09-03 01:03:02 | INFO  | Task 1b91eb34-3356-465e-aa02-fca48e43b3fc is in state STARTED 2025-09-03 01:03:02.288666 | orchestrator | 2025-09-03 01:03:02 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:03:05.311692 | orchestrator | 2025-09-03 01:03:05 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:03:05.311968 | orchestrator | 2025-09-03 01:03:05 | INFO  | Task 2ce95322-778b-4dfc-ac97-427d82cda0df is in state STARTED 2025-09-03 01:03:05.312490 | orchestrator | 2025-09-03 01:03:05 | INFO  | Task 26b38d1d-0318-4371-9dc3-415f6f53dccc is in state STARTED 2025-09-03 01:03:05.313156 | orchestrator | 2025-09-03 01:03:05 | INFO  | Task 1b91eb34-3356-465e-aa02-fca48e43b3fc is in state STARTED 2025-09-03 01:03:05.313179 | orchestrator | 2025-09-03 01:03:05 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:03:08.360390 | orchestrator | 2025-09-03 01:03:08 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:03:08.362092 | orchestrator | 2025-09-03 01:03:08 | INFO  | Task 88d17896-3116-442f-be7a-a99034bbb9d9 is in state STARTED 2025-09-03 01:03:08.365358 | orchestrator | 2025-09-03 01:03:08 | INFO  | Task 2ce95322-778b-4dfc-ac97-427d82cda0df is in state STARTED 2025-09-03 01:03:08.367845 | orchestrator | 2025-09-03 01:03:08 | INFO  | Task 26b38d1d-0318-4371-9dc3-415f6f53dccc is in state STARTED 2025-09-03 01:03:08.370411 | orchestrator | 2025-09-03 01:03:08 | INFO  | Task 1b91eb34-3356-465e-aa02-fca48e43b3fc is in state SUCCESS 2025-09-03 01:03:08.370435 | orchestrator | 2025-09-03 01:03:08 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:03:11.405829 | orchestrator | 2025-09-03 01:03:11 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:03:11.408627 | orchestrator | 2025-09-03 01:03:11 | INFO  | Task 88d17896-3116-442f-be7a-a99034bbb9d9 is in state STARTED 2025-09-03 01:03:11.409148 | orchestrator | 2025-09-03 
01:03:11 | INFO  | Task 2ce95322-778b-4dfc-ac97-427d82cda0df is in state STARTED 2025-09-03 01:03:11.409797 | orchestrator | 2025-09-03 01:03:11 | INFO  | Task 26b38d1d-0318-4371-9dc3-415f6f53dccc is in state STARTED 2025-09-03 01:03:11.409817 | orchestrator | 2025-09-03 01:03:11 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:03:14.459431 | orchestrator | 2025-09-03 01:03:14 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:03:14.460037 | orchestrator | 2025-09-03 01:03:14 | INFO  | Task 88d17896-3116-442f-be7a-a99034bbb9d9 is in state STARTED 2025-09-03 01:03:14.460984 | orchestrator | 2025-09-03 01:03:14 | INFO  | Task 2ce95322-778b-4dfc-ac97-427d82cda0df is in state STARTED 2025-09-03 01:03:14.462352 | orchestrator | 2025-09-03 01:03:14 | INFO  | Task 26b38d1d-0318-4371-9dc3-415f6f53dccc is in state STARTED 2025-09-03 01:03:14.462612 | orchestrator | 2025-09-03 01:03:14 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:03:17.506498 | orchestrator | 2025-09-03 01:03:17 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:03:17.507587 | orchestrator | 2025-09-03 01:03:17 | INFO  | Task 88d17896-3116-442f-be7a-a99034bbb9d9 is in state STARTED 2025-09-03 01:03:17.508694 | orchestrator | 2025-09-03 01:03:17 | INFO  | Task 2ce95322-778b-4dfc-ac97-427d82cda0df is in state STARTED 2025-09-03 01:03:17.509982 | orchestrator | 2025-09-03 01:03:17 | INFO  | Task 26b38d1d-0318-4371-9dc3-415f6f53dccc is in state STARTED 2025-09-03 01:03:17.510061 | orchestrator | 2025-09-03 01:03:17 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:03:20.561868 | orchestrator | 2025-09-03 01:03:20 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:03:20.562652 | orchestrator | 2025-09-03 01:03:20 | INFO  | Task 88d17896-3116-442f-be7a-a99034bbb9d9 is in state STARTED 2025-09-03 01:03:20.564360 | orchestrator | 2025-09-03 01:03:20 | INFO  | Task 2ce95322-778b-4dfc-ac97-427d82cda0df is in state STARTED 2025-09-03 01:03:20.566114 | orchestrator | 2025-09-03 01:03:20 | INFO  | Task 26b38d1d-0318-4371-9dc3-415f6f53dccc is in state STARTED 2025-09-03 01:03:20.566293 | orchestrator | 2025-09-03 01:03:20 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:03:23.611498 | orchestrator | 2025-09-03 01:03:23 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:03:23.612261 | orchestrator | 2025-09-03 01:03:23 | INFO  | Task 88d17896-3116-442f-be7a-a99034bbb9d9 is in state STARTED 2025-09-03 01:03:23.616530 | orchestrator | 2025-09-03 01:03:23.616565 | orchestrator | 2025-09-03 01:03:23.616577 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2025-09-03 01:03:23.616590 | orchestrator | 2025-09-03 01:03:23.616602 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2025-09-03 01:03:23.616613 | orchestrator | Wednesday 03 September 2025 01:02:03 +0000 (0:00:00.077) 0:00:00.077 *** 2025-09-03 01:03:23.616625 | orchestrator | changed: [localhost] 2025-09-03 01:03:23.616639 | orchestrator | 2025-09-03 01:03:23.616652 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2025-09-03 01:03:23.616665 | orchestrator | Wednesday 03 September 2025 01:02:04 +0000 (0:00:00.652) 0:00:00.730 *** 2025-09-03 01:03:23.616677 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (3 retries left). 
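The repeated "Task <uuid> is in state STARTED" / "Wait 1 second(s) until the next check" lines above show the deployment tooling polling its long-running tasks until each one reports SUCCESS. As a rough, illustrative sketch only (the get_task_state helper below is hypothetical and this is not the actual OSISM implementation), the pattern amounts to:

```python
import time

def wait_for_tasks(task_ids, get_task_state, interval=1.0):
    """Poll until every task has left the STARTED state.

    get_task_state is a hypothetical callable (task_id -> str) standing in
    for whatever backend the log above is polling; this only mirrors the
    "check state, log it, wait 1 second" loop visible in the output.
    """
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
```

Note that in the log the set of watched tasks is not fixed: task 88d17896-3116-442f-be7a-a99034bbb9d9 joins the checks around the time 1b91eb34-3356-465e-aa02-fca48e43b3fc reaches SUCCESS, so a real implementation would refresh the pending set on each pass rather than fix it up front.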
2025-09-03 01:03:23.616689 | orchestrator | changed: [localhost] 2025-09-03 01:03:23.616729 | orchestrator | 2025-09-03 01:03:23.616742 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2025-09-03 01:03:23.616754 | orchestrator | Wednesday 03 September 2025 01:03:00 +0000 (0:00:55.713) 0:00:56.444 *** 2025-09-03 01:03:23.616766 | orchestrator | changed: [localhost] 2025-09-03 01:03:23.616777 | orchestrator | 2025-09-03 01:03:23.616789 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-03 01:03:23.616801 | orchestrator | 2025-09-03 01:03:23.616813 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-03 01:03:23.616824 | orchestrator | Wednesday 03 September 2025 01:03:04 +0000 (0:00:04.402) 0:01:00.847 *** 2025-09-03 01:03:23.616836 | orchestrator | ok: [testbed-node-0] 2025-09-03 01:03:23.616848 | orchestrator | ok: [testbed-node-1] 2025-09-03 01:03:23.616860 | orchestrator | ok: [testbed-node-2] 2025-09-03 01:03:23.616871 | orchestrator | 2025-09-03 01:03:23.616883 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-03 01:03:23.616895 | orchestrator | Wednesday 03 September 2025 01:03:04 +0000 (0:00:00.245) 0:01:01.092 *** 2025-09-03 01:03:23.616907 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2025-09-03 01:03:23.617282 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False) 2025-09-03 01:03:23.617303 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 2025-09-03 01:03:23.617314 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False) 2025-09-03 01:03:23.617325 | orchestrator | 2025-09-03 01:03:23.617336 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2025-09-03 01:03:23.617347 | orchestrator | skipping: no hosts matched 2025-09-03 01:03:23.617359 | orchestrator | 2025-09-03 01:03:23.617370 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-03 01:03:23.617382 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-03 01:03:23.617396 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-03 01:03:23.617408 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-03 01:03:23.617419 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-03 01:03:23.617430 | orchestrator | 2025-09-03 01:03:23.617441 | orchestrator | 2025-09-03 01:03:23.617452 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-03 01:03:23.617463 | orchestrator | Wednesday 03 September 2025 01:03:05 +0000 (0:00:00.540) 0:01:01.632 *** 2025-09-03 01:03:23.617474 | orchestrator | =============================================================================== 2025-09-03 01:03:23.617485 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 55.71s 2025-09-03 01:03:23.617496 | orchestrator | Download ironic-agent kernel -------------------------------------------- 4.40s 2025-09-03 01:03:23.617507 | orchestrator | Ensure the destination directory exists --------------------------------- 0.65s 2025-09-03 01:03:23.617518 | orchestrator | 
Group hosts based on enabled services ----------------------------------- 0.54s 2025-09-03 01:03:23.617529 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.25s 2025-09-03 01:03:23.617539 | orchestrator | 2025-09-03 01:03:23.617550 | orchestrator | 2025-09-03 01:03:23.617561 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-03 01:03:23.617572 | orchestrator | 2025-09-03 01:03:23.617583 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-03 01:03:23.617608 | orchestrator | Wednesday 03 September 2025 00:59:35 +0000 (0:00:00.307) 0:00:00.307 *** 2025-09-03 01:03:23.617619 | orchestrator | ok: [testbed-node-0] 2025-09-03 01:03:23.617631 | orchestrator | ok: [testbed-node-1] 2025-09-03 01:03:23.617642 | orchestrator | ok: [testbed-node-2] 2025-09-03 01:03:23.617664 | orchestrator | ok: [testbed-node-3] 2025-09-03 01:03:23.617675 | orchestrator | ok: [testbed-node-4] 2025-09-03 01:03:23.617686 | orchestrator | ok: [testbed-node-5] 2025-09-03 01:03:23.617697 | orchestrator | 2025-09-03 01:03:23.617708 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-03 01:03:23.617719 | orchestrator | Wednesday 03 September 2025 00:59:36 +0000 (0:00:00.646) 0:00:00.953 *** 2025-09-03 01:03:23.617730 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2025-09-03 01:03:23.617742 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2025-09-03 01:03:23.617753 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2025-09-03 01:03:23.617764 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2025-09-03 01:03:23.617775 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2025-09-03 01:03:23.617786 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2025-09-03 01:03:23.617797 | orchestrator | 2025-09-03 01:03:23.617808 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2025-09-03 01:03:23.617819 | orchestrator | 2025-09-03 01:03:23.617830 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-09-03 01:03:23.617850 | orchestrator | Wednesday 03 September 2025 00:59:36 +0000 (0:00:00.536) 0:00:01.490 *** 2025-09-03 01:03:23.617862 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-03 01:03:23.617873 | orchestrator | 2025-09-03 01:03:23.617884 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2025-09-03 01:03:23.617895 | orchestrator | Wednesday 03 September 2025 00:59:37 +0000 (0:00:01.143) 0:00:02.634 *** 2025-09-03 01:03:23.617906 | orchestrator | ok: [testbed-node-1] 2025-09-03 01:03:23.617940 | orchestrator | ok: [testbed-node-0] 2025-09-03 01:03:23.617952 | orchestrator | ok: [testbed-node-2] 2025-09-03 01:03:23.617963 | orchestrator | ok: [testbed-node-3] 2025-09-03 01:03:23.617974 | orchestrator | ok: [testbed-node-4] 2025-09-03 01:03:23.617985 | orchestrator | ok: [testbed-node-5] 2025-09-03 01:03:23.617996 | orchestrator | 2025-09-03 01:03:23.618007 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2025-09-03 01:03:23.618063 | orchestrator | Wednesday 03 September 2025 00:59:38 +0000 (0:00:01.206) 0:00:03.840 
*** 2025-09-03 01:03:23.618075 | orchestrator | ok: [testbed-node-0] 2025-09-03 01:03:23.618086 | orchestrator | ok: [testbed-node-1] 2025-09-03 01:03:23.618097 | orchestrator | ok: [testbed-node-2] 2025-09-03 01:03:23.618108 | orchestrator | ok: [testbed-node-3] 2025-09-03 01:03:23.618119 | orchestrator | ok: [testbed-node-4] 2025-09-03 01:03:23.618387 | orchestrator | ok: [testbed-node-5] 2025-09-03 01:03:23.618400 | orchestrator | 2025-09-03 01:03:23.618411 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2025-09-03 01:03:23.618422 | orchestrator | Wednesday 03 September 2025 00:59:39 +0000 (0:00:00.996) 0:00:04.837 *** 2025-09-03 01:03:23.618433 | orchestrator | ok: [testbed-node-0] => { 2025-09-03 01:03:23.618444 | orchestrator |  "changed": false, 2025-09-03 01:03:23.618456 | orchestrator |  "msg": "All assertions passed" 2025-09-03 01:03:23.618467 | orchestrator | } 2025-09-03 01:03:23.618478 | orchestrator | ok: [testbed-node-1] => { 2025-09-03 01:03:23.618489 | orchestrator |  "changed": false, 2025-09-03 01:03:23.618500 | orchestrator |  "msg": "All assertions passed" 2025-09-03 01:03:23.618511 | orchestrator | } 2025-09-03 01:03:23.618522 | orchestrator | ok: [testbed-node-2] => { 2025-09-03 01:03:23.618533 | orchestrator |  "changed": false, 2025-09-03 01:03:23.618544 | orchestrator |  "msg": "All assertions passed" 2025-09-03 01:03:23.618555 | orchestrator | } 2025-09-03 01:03:23.618566 | orchestrator | ok: [testbed-node-3] => { 2025-09-03 01:03:23.618577 | orchestrator |  "changed": false, 2025-09-03 01:03:23.618588 | orchestrator |  "msg": "All assertions passed" 2025-09-03 01:03:23.618599 | orchestrator | } 2025-09-03 01:03:23.618610 | orchestrator | ok: [testbed-node-4] => { 2025-09-03 01:03:23.618631 | orchestrator |  "changed": false, 2025-09-03 01:03:23.618642 | orchestrator |  "msg": "All assertions passed" 2025-09-03 01:03:23.618653 | orchestrator | } 2025-09-03 01:03:23.618664 | orchestrator | ok: [testbed-node-5] => { 2025-09-03 01:03:23.618675 | orchestrator |  "changed": false, 2025-09-03 01:03:23.618686 | orchestrator |  "msg": "All assertions passed" 2025-09-03 01:03:23.618697 | orchestrator | } 2025-09-03 01:03:23.618708 | orchestrator | 2025-09-03 01:03:23.618719 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2025-09-03 01:03:23.618730 | orchestrator | Wednesday 03 September 2025 00:59:40 +0000 (0:00:00.711) 0:00:05.548 *** 2025-09-03 01:03:23.618741 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:03:23.618752 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:03:23.618763 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:03:23.618774 | orchestrator | skipping: [testbed-node-3] 2025-09-03 01:03:23.618785 | orchestrator | skipping: [testbed-node-4] 2025-09-03 01:03:23.618796 | orchestrator | skipping: [testbed-node-5] 2025-09-03 01:03:23.618807 | orchestrator | 2025-09-03 01:03:23.618818 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2025-09-03 01:03:23.618829 | orchestrator | Wednesday 03 September 2025 00:59:41 +0000 (0:00:00.543) 0:00:06.092 *** 2025-09-03 01:03:23.618840 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2025-09-03 01:03:23.618851 | orchestrator | 2025-09-03 01:03:23.618862 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2025-09-03 01:03:23.618873 | orchestrator | Wednesday 03 September 2025 
00:59:44 +0000 (0:00:03.356) 0:00:09.448 *** 2025-09-03 01:03:23.618883 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2025-09-03 01:03:23.618895 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2025-09-03 01:03:23.618906 | orchestrator | 2025-09-03 01:03:23.618936 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2025-09-03 01:03:23.618949 | orchestrator | Wednesday 03 September 2025 00:59:50 +0000 (0:00:06.071) 0:00:15.519 *** 2025-09-03 01:03:23.618960 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-03 01:03:23.618970 | orchestrator | 2025-09-03 01:03:23.618988 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2025-09-03 01:03:23.619000 | orchestrator | Wednesday 03 September 2025 00:59:53 +0000 (0:00:03.134) 0:00:18.654 *** 2025-09-03 01:03:23.619012 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-03 01:03:23.619026 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2025-09-03 01:03:23.619040 | orchestrator | 2025-09-03 01:03:23.619054 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2025-09-03 01:03:23.619067 | orchestrator | Wednesday 03 September 2025 00:59:57 +0000 (0:00:03.781) 0:00:22.436 *** 2025-09-03 01:03:23.619079 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-03 01:03:23.619092 | orchestrator | 2025-09-03 01:03:23.619105 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2025-09-03 01:03:23.619118 | orchestrator | Wednesday 03 September 2025 01:00:00 +0000 (0:00:03.263) 0:00:25.699 *** 2025-09-03 01:03:23.619131 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2025-09-03 01:03:23.619143 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2025-09-03 01:03:23.619156 | orchestrator | 2025-09-03 01:03:23.619168 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-09-03 01:03:23.619181 | orchestrator | Wednesday 03 September 2025 01:00:08 +0000 (0:00:07.761) 0:00:33.461 *** 2025-09-03 01:03:23.619226 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:03:23.619242 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:03:23.619255 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:03:23.619267 | orchestrator | skipping: [testbed-node-3] 2025-09-03 01:03:23.619281 | orchestrator | skipping: [testbed-node-4] 2025-09-03 01:03:23.619302 | orchestrator | skipping: [testbed-node-5] 2025-09-03 01:03:23.619315 | orchestrator | 2025-09-03 01:03:23.619328 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2025-09-03 01:03:23.619340 | orchestrator | Wednesday 03 September 2025 01:00:09 +0000 (0:00:00.699) 0:00:34.161 *** 2025-09-03 01:03:23.619354 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:03:23.619367 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:03:23.619379 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:03:23.619390 | orchestrator | skipping: [testbed-node-4] 2025-09-03 01:03:23.619401 | orchestrator | skipping: [testbed-node-3] 2025-09-03 01:03:23.619412 | orchestrator | skipping: [testbed-node-5] 2025-09-03 01:03:23.619423 | orchestrator | 2025-09-03 01:03:23.619434 | 
orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2025-09-03 01:03:23.619445 | orchestrator | Wednesday 03 September 2025 01:00:11 +0000 (0:00:01.855) 0:00:36.017 *** 2025-09-03 01:03:23.619456 | orchestrator | ok: [testbed-node-1] 2025-09-03 01:03:23.619467 | orchestrator | ok: [testbed-node-0] 2025-09-03 01:03:23.619478 | orchestrator | ok: [testbed-node-2] 2025-09-03 01:03:23.619489 | orchestrator | ok: [testbed-node-3] 2025-09-03 01:03:23.619500 | orchestrator | ok: [testbed-node-4] 2025-09-03 01:03:23.619511 | orchestrator | ok: [testbed-node-5] 2025-09-03 01:03:23.619522 | orchestrator | 2025-09-03 01:03:23.619533 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-09-03 01:03:23.619544 | orchestrator | Wednesday 03 September 2025 01:00:12 +0000 (0:00:01.042) 0:00:37.059 *** 2025-09-03 01:03:23.619555 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:03:23.619566 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:03:23.619577 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:03:23.619589 | orchestrator | skipping: [testbed-node-5] 2025-09-03 01:03:23.619600 | orchestrator | skipping: [testbed-node-3] 2025-09-03 01:03:23.619611 | orchestrator | skipping: [testbed-node-4] 2025-09-03 01:03:23.619622 | orchestrator | 2025-09-03 01:03:23.619633 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2025-09-03 01:03:23.619643 | orchestrator | Wednesday 03 September 2025 01:00:14 +0000 (0:00:01.978) 0:00:39.038 *** 2025-09-03 01:03:23.619658 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-03 01:03:23.619679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-03 01:03:23.619718 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-03 01:03:23.619739 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-03 01:03:23.619752 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-03 01:03:23.619764 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-03 01:03:23.619775 | orchestrator | 2025-09-03 01:03:23.619787 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2025-09-03 01:03:23.619798 | orchestrator | Wednesday 03 September 2025 01:00:17 +0000 (0:00:03.156) 0:00:42.194 *** 2025-09-03 01:03:23.619809 | orchestrator | [WARNING]: Skipped 2025-09-03 01:03:23.619821 | 
orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2025-09-03 01:03:23.619832 | orchestrator | due to this access issue: 2025-09-03 01:03:23.619843 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2025-09-03 01:03:23.619854 | orchestrator | a directory 2025-09-03 01:03:23.619865 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-03 01:03:23.619876 | orchestrator | 2025-09-03 01:03:23.619887 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-09-03 01:03:23.619904 | orchestrator | Wednesday 03 September 2025 01:00:18 +0000 (0:00:00.790) 0:00:42.985 *** 2025-09-03 01:03:23.619916 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-03 01:03:23.619973 | orchestrator | 2025-09-03 01:03:23.619989 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2025-09-03 01:03:23.620000 | orchestrator | Wednesday 03 September 2025 01:00:19 +0000 (0:00:01.171) 0:00:44.157 *** 2025-09-03 01:03:23.620012 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-03 01:03:23.620053 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-03 01:03:23.620067 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-03 01:03:23.620079 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-03 01:03:23.620095 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-03 01:03:23.620140 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-03 01:03:23.620154 | orchestrator | 2025-09-03 01:03:23.620165 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-09-03 01:03:23.620176 | orchestrator | Wednesday 03 September 2025 01:00:22 +0000 (0:00:02.979) 0:00:47.136 *** 2025-09-03 01:03:23.620188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-03 01:03:23.620200 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:03:23.620212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-03 01:03:23.620223 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:03:23.620235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-03 01:03:23.620253 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:03:23.620270 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-03 01:03:23.620283 | orchestrator | skipping: [testbed-node-5] 2025-09-03 01:03:23.620321 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-03 01:03:23.620334 | orchestrator | skipping: [testbed-node-3] 2025-09-03 01:03:23.620346 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-03 01:03:23.620357 | orchestrator | skipping: [testbed-node-4] 2025-09-03 01:03:23.620368 | orchestrator | 2025-09-03 01:03:23.620379 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-09-03 01:03:23.620390 | orchestrator | Wednesday 03 September 2025 01:00:24 +0000 (0:00:02.433) 0:00:49.570 *** 2025-09-03 01:03:23.620401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-03 01:03:23.620425 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:03:23.620442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': 
'9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-03 01:03:23.620453 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:03:23.620490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-03 01:03:23.620502 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:03:23.620514 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-03 01:03:23.620525 | orchestrator | skipping: [testbed-node-3] 2025-09-03 01:03:23.620536 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-03 01:03:23.620548 | orchestrator | skipping: [testbed-node-5] 2025-09-03 01:03:23.620559 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-03 01:03:23.620660 | orchestrator | skipping: [testbed-node-4] 2025-09-03 01:03:23.620678 | orchestrator | 2025-09-03 01:03:23.620689 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2025-09-03 01:03:23.620700 | orchestrator | Wednesday 03 September 2025 01:00:27 +0000 (0:00:02.808) 0:00:52.378 *** 2025-09-03 01:03:23.620711 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:03:23.620722 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:03:23.620733 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:03:23.620744 | orchestrator | skipping: [testbed-node-3] 2025-09-03 01:03:23.620755 | orchestrator | skipping: [testbed-node-5] 2025-09-03 01:03:23.620766 | orchestrator | skipping: [testbed-node-4] 2025-09-03 01:03:23.620777 | orchestrator | 2025-09-03 01:03:23.620787 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2025-09-03 01:03:23.620798 | orchestrator | Wednesday 03 September 2025 01:00:29 +0000 (0:00:01.557) 0:00:53.936 *** 2025-09-03 01:03:23.620809 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:03:23.620820 | orchestrator | 2025-09-03 01:03:23.620831 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2025-09-03 01:03:23.620848 | orchestrator | Wednesday 03 September 2025 01:00:29 +0000 (0:00:00.103) 0:00:54.040 *** 2025-09-03 01:03:23.620859 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:03:23.620870 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:03:23.620881 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:03:23.620891 | orchestrator | skipping: [testbed-node-3] 2025-09-03 01:03:23.620902 | orchestrator | skipping: [testbed-node-4] 2025-09-03 01:03:23.620913 | orchestrator | skipping: [testbed-node-5] 2025-09-03 01:03:23.620945 | orchestrator | 2025-09-03 01:03:23.620956 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2025-09-03 01:03:23.620967 | orchestrator | Wednesday 03 September 2025 01:00:29 +0000 (0:00:00.564) 0:00:54.604 *** 2025-09-03 01:03:23.621008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-03 01:03:23 | INFO  | Task 2ce95322-778b-4dfc-ac97-427d82cda0df is in state SUCCESS 2025-09-03 01:03:23.621037 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:03:23.621048 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': 
{'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-03 01:03:23.621068 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:03:23.621080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-03 01:03:23.621091 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:03:23.621102 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-03 01:03:23.621119 | orchestrator | skipping: [testbed-node-4] 2025-09-03 01:03:23.621130 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-03 01:03:23.621142 | orchestrator | 
skipping: [testbed-node-3] 2025-09-03 01:03:23.621163 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-03 01:03:23.621182 | orchestrator | skipping: [testbed-node-5] 2025-09-03 01:03:23.621193 | orchestrator | 2025-09-03 01:03:23.621204 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2025-09-03 01:03:23.621215 | orchestrator | Wednesday 03 September 2025 01:00:31 +0000 (0:00:01.670) 0:00:56.275 *** 2025-09-03 01:03:23.621227 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-03 01:03:23.621239 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-03 01:03:23.621256 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-03 01:03:23.621274 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-03 01:03:23.621286 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-03 01:03:23.621305 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-03 01:03:23.621319 | orchestrator | 2025-09-03 01:03:23.621332 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-09-03 01:03:23.621345 | orchestrator | Wednesday 03 September 2025 01:00:34 +0000 (0:00:02.819) 0:00:59.094 *** 2025-09-03 01:03:23.621360 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-03 01:03:23.621380 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-03 01:03:23.621402 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-03 01:03:23.621423 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-03 01:03:23.621437 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-03 01:03:23.621450 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-03 01:03:23.621463 | orchestrator | 2025-09-03 01:03:23.621476 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2025-09-03 01:03:23.621489 | orchestrator | Wednesday 03 September 2025 01:00:39 +0000 (0:00:05.164) 0:01:04.259 *** 2025-09-03 01:03:23.621503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-03 01:03:23.621517 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:03:23.621538 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-03 01:03:23.621559 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:03:23.621572 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 
'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-03 01:03:23.621586 | orchestrator | skipping: [testbed-node-3] 2025-09-03 01:03:23.621600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-03 01:03:23.621613 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:03:23.621658 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-03 01:03:23.621671 | orchestrator | skipping: [testbed-node-4] 2025-09-03 01:03:23.621687 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-03 01:03:23.621705 | orchestrator | skipping: [testbed-node-5] 2025-09-03 01:03:23.621717 | orchestrator | 2025-09-03 01:03:23.621728 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2025-09-03 01:03:23.621739 | orchestrator | Wednesday 03 September 2025 01:00:42 +0000 (0:00:02.712) 
0:01:06.971 *** 2025-09-03 01:03:23.621750 | orchestrator | skipping: [testbed-node-3] 2025-09-03 01:03:23.621762 | orchestrator | skipping: [testbed-node-4] 2025-09-03 01:03:23.621773 | orchestrator | skipping: [testbed-node-5] 2025-09-03 01:03:23.621790 | orchestrator | changed: [testbed-node-0] 2025-09-03 01:03:23.621802 | orchestrator | changed: [testbed-node-1] 2025-09-03 01:03:23.621813 | orchestrator | changed: [testbed-node-2] 2025-09-03 01:03:23.621824 | orchestrator | 2025-09-03 01:03:23.621835 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2025-09-03 01:03:23.621846 | orchestrator | Wednesday 03 September 2025 01:00:45 +0000 (0:00:03.470) 0:01:10.442 *** 2025-09-03 01:03:23.621857 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-03 01:03:23.621869 | orchestrator | skipping: [testbed-node-4] 2025-09-03 01:03:23.621881 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-03 01:03:23.621892 | orchestrator | skipping: [testbed-node-5] 2025-09-03 01:03:23.621904 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-03 01:03:23.621916 | orchestrator | skipping: [testbed-node-3] 2025-09-03 01:03:23.621994 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-03 01:03:23.622077 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-03 01:03:23.622094 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-03 01:03:23.622106 | orchestrator | 2025-09-03 01:03:23.622117 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2025-09-03 01:03:23.622127 | orchestrator | Wednesday 03 September 2025 01:00:49 +0000 (0:00:04.315) 0:01:14.758 *** 2025-09-03 01:03:23.622137 | orchestrator | skipping: [testbed-node-4] 2025-09-03 01:03:23.622146 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:03:23.622156 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:03:23.622166 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:03:23.622176 | orchestrator | skipping: [testbed-node-3] 2025-09-03 01:03:23.622186 | orchestrator | skipping: [testbed-node-5] 2025-09-03 01:03:23.622196 | orchestrator | 2025-09-03 01:03:23.622206 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2025-09-03 01:03:23.622216 | orchestrator | Wednesday 03 September 2025 01:00:52 +0000 (0:00:02.628) 0:01:17.386 *** 2025-09-03 01:03:23.622226 | orchestrator | skipping: 
[testbed-node-0] 2025-09-03 01:03:23.622235 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:03:23.622245 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:03:23.622255 | orchestrator | skipping: [testbed-node-4] 2025-09-03 01:03:23.622265 | orchestrator | skipping: [testbed-node-5] 2025-09-03 01:03:23.622275 | orchestrator | skipping: [testbed-node-3] 2025-09-03 01:03:23.622285 | orchestrator | 2025-09-03 01:03:23.622295 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2025-09-03 01:03:23.622305 | orchestrator | Wednesday 03 September 2025 01:00:54 +0000 (0:00:02.132) 0:01:19.519 *** 2025-09-03 01:03:23.622314 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:03:23.622324 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:03:23.622334 | orchestrator | skipping: [testbed-node-4] 2025-09-03 01:03:23.622344 | orchestrator | skipping: [testbed-node-3] 2025-09-03 01:03:23.622354 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:03:23.622363 | orchestrator | skipping: [testbed-node-5] 2025-09-03 01:03:23.622380 | orchestrator | 2025-09-03 01:03:23.622390 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2025-09-03 01:03:23.622400 | orchestrator | Wednesday 03 September 2025 01:00:56 +0000 (0:00:02.373) 0:01:21.893 *** 2025-09-03 01:03:23.622410 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:03:23.622420 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:03:23.622429 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:03:23.622439 | orchestrator | skipping: [testbed-node-3] 2025-09-03 01:03:23.622449 | orchestrator | skipping: [testbed-node-4] 2025-09-03 01:03:23.622459 | orchestrator | skipping: [testbed-node-5] 2025-09-03 01:03:23.622468 | orchestrator | 2025-09-03 01:03:23.622478 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2025-09-03 01:03:23.622488 | orchestrator | Wednesday 03 September 2025 01:00:59 +0000 (0:00:02.838) 0:01:24.731 *** 2025-09-03 01:03:23.622498 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:03:23.622508 | orchestrator | skipping: [testbed-node-5] 2025-09-03 01:03:23.622517 | orchestrator | skipping: [testbed-node-4] 2025-09-03 01:03:23.622527 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:03:23.622537 | orchestrator | skipping: [testbed-node-3] 2025-09-03 01:03:23.622547 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:03:23.622557 | orchestrator | 2025-09-03 01:03:23.622571 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2025-09-03 01:03:23.622581 | orchestrator | Wednesday 03 September 2025 01:01:02 +0000 (0:00:02.336) 0:01:27.068 *** 2025-09-03 01:03:23.622591 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:03:23.622601 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:03:23.622611 | orchestrator | skipping: [testbed-node-4] 2025-09-03 01:03:23.622621 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:03:23.622630 | orchestrator | skipping: [testbed-node-3] 2025-09-03 01:03:23.622640 | orchestrator | skipping: [testbed-node-5] 2025-09-03 01:03:23.622650 | orchestrator | 2025-09-03 01:03:23.622660 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2025-09-03 01:03:23.622669 | orchestrator | Wednesday 03 September 2025 01:01:04 +0000 (0:00:01.951) 0:01:29.020 *** 2025-09-03 01:03:23.622679 | orchestrator | 
skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-03 01:03:23.622689 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:03:23.622699 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-03 01:03:23.622709 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:03:23.622720 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-03 01:03:23.622735 | orchestrator | skipping: [testbed-node-3] 2025-09-03 01:03:23.622745 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-03 01:03:23.622755 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:03:23.622765 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-03 01:03:23.622775 | orchestrator | skipping: [testbed-node-5] 2025-09-03 01:03:23.622785 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-03 01:03:23.622795 | orchestrator | skipping: [testbed-node-4] 2025-09-03 01:03:23.622805 | orchestrator | 2025-09-03 01:03:23.622815 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2025-09-03 01:03:23.622824 | orchestrator | Wednesday 03 September 2025 01:01:06 +0000 (0:00:02.047) 0:01:31.067 *** 2025-09-03 01:03:23.622835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-03 01:03:23.622850 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:03:23.622861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-03 01:03:23.622871 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:03:23.622885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-03 01:03:23.622896 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:03:23.622911 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-03 01:03:23.622937 | orchestrator | skipping: [testbed-node-3] 2025-09-03 01:03:23.622947 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-03 01:03:23.622964 | orchestrator | skipping: [testbed-node-4] 2025-09-03 01:03:23.622974 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-03 01:03:23.622984 | orchestrator | skipping: [testbed-node-5] 2025-09-03 01:03:23.622994 | orchestrator | 2025-09-03 01:03:23.623004 | orchestrator | TASK 
[neutron : Copying over fwaas_driver.ini] ********************************* 2025-09-03 01:03:23.623014 | orchestrator | Wednesday 03 September 2025 01:01:08 +0000 (0:00:01.890) 0:01:32.957 *** 2025-09-03 01:03:23.623024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-03 01:03:23.623034 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:03:23.623049 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-03 01:03:23.623059 | orchestrator | skipping: [testbed-node-4] 2025-09-03 01:03:23.623075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-03 01:03:23.623086 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:03:23.623096 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-03 01:03:23.623113 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:03:23.623123 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-03 01:03:23.623133 | orchestrator | skipping: [testbed-node-3] 2025-09-03 01:03:23.623143 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-03 01:03:23.623153 | orchestrator | skipping: [testbed-node-5] 2025-09-03 01:03:23.623163 | orchestrator | 2025-09-03 01:03:23.623173 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2025-09-03 01:03:23.623183 | orchestrator | Wednesday 03 September 2025 01:01:10 +0000 (0:00:02.164) 0:01:35.122 *** 2025-09-03 01:03:23.623193 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:03:23.623203 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:03:23.623213 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:03:23.623223 | orchestrator | skipping: [testbed-node-3] 2025-09-03 01:03:23.623237 | orchestrator | skipping: [testbed-node-4] 2025-09-03 01:03:23.623247 | orchestrator | skipping: [testbed-node-5] 2025-09-03 01:03:23.623257 | orchestrator | 2025-09-03 01:03:23.623267 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2025-09-03 01:03:23.623277 | orchestrator | Wednesday 03 September 2025 01:01:11 +0000 (0:00:01.728) 0:01:36.850 *** 2025-09-03 01:03:23.623287 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:03:23.623296 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:03:23.623306 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:03:23.623316 | orchestrator | changed: [testbed-node-5] 2025-09-03 01:03:23.623326 | orchestrator | changed: [testbed-node-3] 2025-09-03 
01:03:23.623336 | orchestrator | changed: [testbed-node-4] 2025-09-03 01:03:23.623346 | orchestrator | 2025-09-03 01:03:23.623355 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2025-09-03 01:03:23.623365 | orchestrator | Wednesday 03 September 2025 01:01:15 +0000 (0:00:03.663) 0:01:40.514 *** 2025-09-03 01:03:23.623381 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:03:23.623391 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:03:23.623401 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:03:23.623411 | orchestrator | skipping: [testbed-node-3] 2025-09-03 01:03:23.623420 | orchestrator | skipping: [testbed-node-5] 2025-09-03 01:03:23.623430 | orchestrator | skipping: [testbed-node-4] 2025-09-03 01:03:23.623440 | orchestrator | 2025-09-03 01:03:23.623450 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2025-09-03 01:03:23.623472 | orchestrator | Wednesday 03 September 2025 01:01:18 +0000 (0:00:02.521) 0:01:43.036 *** 2025-09-03 01:03:23.623482 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:03:23.623492 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:03:23.623502 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:03:23.623512 | orchestrator | skipping: [testbed-node-3] 2025-09-03 01:03:23.623522 | orchestrator | skipping: [testbed-node-4] 2025-09-03 01:03:23.623532 | orchestrator | skipping: [testbed-node-5] 2025-09-03 01:03:23.623541 | orchestrator | 2025-09-03 01:03:23.623551 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2025-09-03 01:03:23.623561 | orchestrator | Wednesday 03 September 2025 01:01:20 +0000 (0:00:02.683) 0:01:45.719 *** 2025-09-03 01:03:23.623571 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:03:23.623581 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:03:23.623591 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:03:23.623601 | orchestrator | skipping: [testbed-node-3] 2025-09-03 01:03:23.623610 | orchestrator | skipping: [testbed-node-4] 2025-09-03 01:03:23.623620 | orchestrator | skipping: [testbed-node-5] 2025-09-03 01:03:23.623630 | orchestrator | 2025-09-03 01:03:23.623640 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2025-09-03 01:03:23.623650 | orchestrator | Wednesday 03 September 2025 01:01:23 +0000 (0:00:02.219) 0:01:47.938 *** 2025-09-03 01:03:23.623660 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:03:23.623669 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:03:23.623679 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:03:23.623689 | orchestrator | skipping: [testbed-node-3] 2025-09-03 01:03:23.623699 | orchestrator | skipping: [testbed-node-4] 2025-09-03 01:03:23.623709 | orchestrator | skipping: [testbed-node-5] 2025-09-03 01:03:23.623719 | orchestrator | 2025-09-03 01:03:23.623728 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2025-09-03 01:03:23.623738 | orchestrator | Wednesday 03 September 2025 01:01:25 +0000 (0:00:02.021) 0:01:49.961 *** 2025-09-03 01:03:23.623748 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:03:23.623758 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:03:23.623768 | orchestrator | skipping: [testbed-node-4] 2025-09-03 01:03:23.623777 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:03:23.623787 | orchestrator | skipping: [testbed-node-3] 
2025-09-03 01:03:23.623797 | orchestrator | skipping: [testbed-node-5] 2025-09-03 01:03:23.623807 | orchestrator | 2025-09-03 01:03:23.623817 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2025-09-03 01:03:23.623827 | orchestrator | Wednesday 03 September 2025 01:01:27 +0000 (0:00:02.319) 0:01:52.280 *** 2025-09-03 01:03:23.623837 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:03:23.623846 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:03:23.623856 | orchestrator | skipping: [testbed-node-3] 2025-09-03 01:03:23.623866 | orchestrator | skipping: [testbed-node-4] 2025-09-03 01:03:23.623876 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:03:23.623886 | orchestrator | skipping: [testbed-node-5] 2025-09-03 01:03:23.623895 | orchestrator | 2025-09-03 01:03:23.623905 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2025-09-03 01:03:23.623915 | orchestrator | Wednesday 03 September 2025 01:01:30 +0000 (0:00:03.102) 0:01:55.383 *** 2025-09-03 01:03:23.623939 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:03:23.623948 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:03:23.623964 | orchestrator | skipping: [testbed-node-3] 2025-09-03 01:03:23.623974 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:03:23.623984 | orchestrator | skipping: [testbed-node-4] 2025-09-03 01:03:23.623994 | orchestrator | skipping: [testbed-node-5] 2025-09-03 01:03:23.624003 | orchestrator | 2025-09-03 01:03:23.624013 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2025-09-03 01:03:23.624023 | orchestrator | Wednesday 03 September 2025 01:01:32 +0000 (0:00:02.255) 0:01:57.638 *** 2025-09-03 01:03:23.624033 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-03 01:03:23.624043 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:03:23.624053 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-03 01:03:23.624063 | orchestrator | skipping: [testbed-node-3] 2025-09-03 01:03:23.624073 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-03 01:03:23.624083 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-03 01:03:23.624093 | orchestrator | skipping: [testbed-node-5] 2025-09-03 01:03:23.624103 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:03:23.624113 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-03 01:03:23.624123 | orchestrator | skipping: [testbed-node-4] 2025-09-03 01:03:23.624137 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-03 01:03:23.624147 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:03:23.624157 | orchestrator | 2025-09-03 01:03:23.624167 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2025-09-03 01:03:23.624177 | orchestrator | Wednesday 03 September 2025 01:01:35 +0000 (0:00:02.784) 0:02:00.422 *** 2025-09-03 01:03:23.624193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 
'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-03 01:03:23.624204 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:03:23.624214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-03 01:03:23.624224 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:03:23.624235 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-03 01:03:23.624250 | orchestrator | skipping: [testbed-node-3] 2025-09-03 01:03:23.624261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696'}}}})  2025-09-03 01:03:23.624271 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:03:23.624285 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-03 01:03:23.624295 | orchestrator | skipping: [testbed-node-4] 2025-09-03 01:03:23.624311 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-03 01:03:23.624322 | orchestrator | skipping: [testbed-node-5] 2025-09-03 01:03:23.624332 | orchestrator | 2025-09-03 01:03:23.624342 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-09-03 01:03:23.624352 | orchestrator | Wednesday 03 September 2025 01:01:38 +0000 (0:00:02.814) 0:02:03.237 *** 2025-09-03 01:03:23.624362 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-03 01:03:23.624384 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-03 01:03:23.624394 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-03 01:03:23.624409 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-03 01:03:23.624425 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-03 01:03:23.624436 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-03 01:03:23.624452 | orchestrator | 2025-09-03 01:03:23.624462 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-09-03 01:03:23.624472 | orchestrator | Wednesday 03 September 2025 01:01:42 +0000 (0:00:04.076) 0:02:07.313 *** 2025-09-03 01:03:23.624482 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:03:23.624492 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:03:23.624502 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:03:23.624512 | orchestrator | skipping: [testbed-node-3] 2025-09-03 01:03:23.624522 | orchestrator | skipping: [testbed-node-4] 2025-09-03 01:03:23.624532 | orchestrator | skipping: [testbed-node-5] 2025-09-03 01:03:23.624541 | orchestrator | 2025-09-03 01:03:23.624551 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2025-09-03 01:03:23.624561 | orchestrator | Wednesday 03 September 2025 01:01:42 +0000 (0:00:00.443) 0:02:07.756 *** 2025-09-03 01:03:23.624571 | orchestrator | changed: [testbed-node-0] 2025-09-03 01:03:23.624581 | orchestrator | 2025-09-03 01:03:23.624591 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2025-09-03 01:03:23.624601 | orchestrator | Wednesday 03 September 2025 01:01:44 +0000 (0:00:02.094) 0:02:09.851 *** 2025-09-03 01:03:23.624610 | orchestrator | changed: [testbed-node-0] 2025-09-03 01:03:23.624620 | orchestrator | 2025-09-03 01:03:23.624630 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2025-09-03 01:03:23.624640 | orchestrator | Wednesday 03 September 2025 01:01:47 +0000 (0:00:02.242) 0:02:12.094 *** 2025-09-03 01:03:23.624650 | orchestrator | changed: [testbed-node-0] 2025-09-03 01:03:23.624659 | orchestrator | 2025-09-03 01:03:23.624669 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-03 01:03:23.624679 | orchestrator | Wednesday 03 September 2025 01:02:31 +0000 (0:00:43.981) 0:02:56.076 *** 2025-09-03 01:03:23.624689 | orchestrator | 2025-09-03 01:03:23.624699 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-03 01:03:23.624709 | orchestrator | Wednesday 03 September 2025 01:02:31 +0000 (0:00:00.114) 0:02:56.190 *** 2025-09-03 01:03:23.624718 | orchestrator | 2025-09-03 01:03:23.624728 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-03 01:03:23.624738 | orchestrator | Wednesday 03 September 2025 01:02:31 +0000 (0:00:00.266) 0:02:56.456 *** 2025-09-03 01:03:23.624748 | orchestrator | 2025-09-03 01:03:23.624757 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-03 01:03:23.624767 | orchestrator | Wednesday 03 September 2025 01:02:31 +0000 (0:00:00.108) 0:02:56.565 *** 2025-09-03 01:03:23.624777 | orchestrator | 2025-09-03 01:03:23.624787 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-03 01:03:23.624801 | orchestrator | Wednesday 03 September 2025 01:02:31 +0000 (0:00:00.061) 0:02:56.627 *** 2025-09-03 01:03:23.624810 | orchestrator | 2025-09-03 01:03:23.624820 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-03 01:03:23.624830 | orchestrator | Wednesday 03 September 2025 01:02:31 +0000 (0:00:00.060) 
0:02:56.687 *** 2025-09-03 01:03:23.624840 | orchestrator | 2025-09-03 01:03:23.624849 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2025-09-03 01:03:23.624859 | orchestrator | Wednesday 03 September 2025 01:02:31 +0000 (0:00:00.056) 0:02:56.743 *** 2025-09-03 01:03:23.624869 | orchestrator | changed: [testbed-node-0] 2025-09-03 01:03:23.624879 | orchestrator | changed: [testbed-node-1] 2025-09-03 01:03:23.624889 | orchestrator | changed: [testbed-node-2] 2025-09-03 01:03:23.624899 | orchestrator | 2025-09-03 01:03:23.624915 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2025-09-03 01:03:23.624967 | orchestrator | Wednesday 03 September 2025 01:02:58 +0000 (0:00:26.608) 0:03:23.352 *** 2025-09-03 01:03:23.624978 | orchestrator | changed: [testbed-node-4] 2025-09-03 01:03:23.624988 | orchestrator | changed: [testbed-node-3] 2025-09-03 01:03:23.624998 | orchestrator | changed: [testbed-node-5] 2025-09-03 01:03:23.625007 | orchestrator | 2025-09-03 01:03:23.625017 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-03 01:03:23.625033 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-03 01:03:23.625042 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-09-03 01:03:23.625050 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-09-03 01:03:23.625058 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-03 01:03:23.625066 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-03 01:03:23.625074 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-03 01:03:23.625082 | orchestrator | 2025-09-03 01:03:23.625090 | orchestrator | 2025-09-03 01:03:23.625098 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-03 01:03:23.625106 | orchestrator | Wednesday 03 September 2025 01:03:23 +0000 (0:00:24.606) 0:03:47.959 *** 2025-09-03 01:03:23.625114 | orchestrator | =============================================================================== 2025-09-03 01:03:23.625122 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 43.98s 2025-09-03 01:03:23.625130 | orchestrator | neutron : Restart neutron-server container ----------------------------- 26.61s 2025-09-03 01:03:23.625138 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 24.61s 2025-09-03 01:03:23.625146 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.76s 2025-09-03 01:03:23.625154 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.07s 2025-09-03 01:03:23.625162 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 5.16s 2025-09-03 01:03:23.625170 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 4.32s 2025-09-03 01:03:23.625178 | orchestrator | neutron : Check neutron containers -------------------------------------- 4.08s 2025-09-03 01:03:23.625186 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.78s 
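Each container definition in the loop items of this play (neutron-server, the neutron-ovn-metadata-agent, and the designate services in the play that follows) carries a Docker-style healthcheck block: a CMD-SHELL test such as "healthcheck_curl http://192.168.16.10:9696", "healthcheck_port designate-central 5672" or "healthcheck_listen named 53", together with interval, retries, start_period and timeout values in seconds. kolla-ansible hands these fields to the container engine, and the healthcheck_* helpers ship inside the Kolla images; purely to illustrate the semantics of such a spec, a minimal sketch could look like this (the function and its Docker-like retry behaviour are assumptions of the sketch, not part of the deployment):

import subprocess
import time

def run_healthcheck(spec: dict) -> bool:
    """Evaluate a kolla-style healthcheck dict with Docker-like semantics.

    Example spec, as printed in this log:
    {'interval': '30', 'retries': '3', 'start_period': '5',
     'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'],
     'timeout': '30'}
    """
    kind, command = spec["test"]
    if kind != "CMD-SHELL":
        raise ValueError(f"unsupported test type: {kind}")
    time.sleep(int(spec["start_period"]))            # grace period after container start
    for _ in range(int(spec["retries"])):
        try:
            result = subprocess.run(command, shell=True,
                                    timeout=int(spec["timeout"]))
            if result.returncode == 0:
                return True                          # probe succeeded: healthy
        except subprocess.TimeoutExpired:
            pass                                     # a timed-out probe counts as a failure
        time.sleep(int(spec["interval"]))            # wait before the next probe
    return False                                     # still failing after all retries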
2025-09-03 01:03:23.625193 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 3.66s 2025-09-03 01:03:23.625201 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 3.47s 2025-09-03 01:03:23.625209 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.36s 2025-09-03 01:03:23.625217 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.26s 2025-09-03 01:03:23.625225 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 3.16s 2025-09-03 01:03:23.625233 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.13s 2025-09-03 01:03:23.625241 | orchestrator | neutron : Copy neutron-l3-agent-wrapper script -------------------------- 3.10s 2025-09-03 01:03:23.625249 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 2.98s 2025-09-03 01:03:23.625257 | orchestrator | neutron : Copying over mlnx_agent.ini ----------------------------------- 2.84s 2025-09-03 01:03:23.625265 | orchestrator | neutron : Copying over config.json files for services ------------------- 2.82s 2025-09-03 01:03:23.625279 | orchestrator | neutron : Copying over neutron_taas.conf -------------------------------- 2.81s 2025-09-03 01:03:23.625287 | orchestrator | 2025-09-03 01:03:23 | INFO  | Task 26b38d1d-0318-4371-9dc3-415f6f53dccc is in state STARTED 2025-09-03 01:03:23.625295 | orchestrator | 2025-09-03 01:03:23 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:03:26.680501 | orchestrator | 2025-09-03 01:03:26 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:03:26.682106 | orchestrator | 2025-09-03 01:03:26 | INFO  | Task 88d17896-3116-442f-be7a-a99034bbb9d9 is in state STARTED 2025-09-03 01:03:26.684228 | orchestrator | 2025-09-03 01:03:26 | INFO  | Task 6badcc18-5225-4448-a1f9-07f98f140883 is in state STARTED 2025-09-03 01:03:26.685571 | orchestrator | 2025-09-03 01:03:26 | INFO  | Task 26b38d1d-0318-4371-9dc3-415f6f53dccc is in state STARTED 2025-09-03 01:03:26.685598 | orchestrator | 2025-09-03 01:03:26 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:03:29.724231 | orchestrator | 2025-09-03 01:03:29 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:03:29.725527 | orchestrator | 2025-09-03 01:03:29 | INFO  | Task 88d17896-3116-442f-be7a-a99034bbb9d9 is in state STARTED 2025-09-03 01:03:29.727575 | orchestrator | 2025-09-03 01:03:29 | INFO  | Task 6badcc18-5225-4448-a1f9-07f98f140883 is in state STARTED 2025-09-03 01:03:29.729289 | orchestrator | 2025-09-03 01:03:29 | INFO  | Task 26b38d1d-0318-4371-9dc3-415f6f53dccc is in state STARTED 2025-09-03 01:03:29.729315 | orchestrator | 2025-09-03 01:03:29 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:03:32.775515 | orchestrator | 2025-09-03 01:03:32 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:03:32.777475 | orchestrator | 2025-09-03 01:03:32 | INFO  | Task 88d17896-3116-442f-be7a-a99034bbb9d9 is in state STARTED 2025-09-03 01:03:32.778985 | orchestrator | 2025-09-03 01:03:32 | INFO  | Task 6badcc18-5225-4448-a1f9-07f98f140883 is in state STARTED 2025-09-03 01:03:32.781323 | orchestrator | 2025-09-03 01:03:32 | INFO  | Task 26b38d1d-0318-4371-9dc3-415f6f53dccc is in state STARTED 2025-09-03 01:03:32.781353 | orchestrator | 2025-09-03 01:03:32 | INFO  | Wait 1 
second(s) until the next check 2025-09-03 01:03:35.826186 | orchestrator | 2025-09-03 01:03:35 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:03:35.826971 | orchestrator | 2025-09-03 01:03:35 | INFO  | Task 88d17896-3116-442f-be7a-a99034bbb9d9 is in state STARTED 2025-09-03 01:03:35.829025 | orchestrator | 2025-09-03 01:03:35 | INFO  | Task 6badcc18-5225-4448-a1f9-07f98f140883 is in state STARTED 2025-09-03 01:03:35.830954 | orchestrator | 2025-09-03 01:03:35 | INFO  | Task 26b38d1d-0318-4371-9dc3-415f6f53dccc is in state STARTED 2025-09-03 01:03:35.831015 | orchestrator | 2025-09-03 01:03:35 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:03:38.869870 | orchestrator | 2025-09-03 01:03:38 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:03:38.870819 | orchestrator | 2025-09-03 01:03:38 | INFO  | Task 88d17896-3116-442f-be7a-a99034bbb9d9 is in state STARTED 2025-09-03 01:03:38.870860 | orchestrator | 2025-09-03 01:03:38 | INFO  | Task 6badcc18-5225-4448-a1f9-07f98f140883 is in state STARTED 2025-09-03 01:03:38.871539 | orchestrator | 2025-09-03 01:03:38 | INFO  | Task 26b38d1d-0318-4371-9dc3-415f6f53dccc is in state STARTED 2025-09-03 01:03:38.871565 | orchestrator | 2025-09-03 01:03:38 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:03:41.903588 | orchestrator | 2025-09-03 01:03:41 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:03:41.904369 | orchestrator | 2025-09-03 01:03:41 | INFO  | Task 88d17896-3116-442f-be7a-a99034bbb9d9 is in state STARTED 2025-09-03 01:03:41.906324 | orchestrator | 2025-09-03 01:03:41 | INFO  | Task 6badcc18-5225-4448-a1f9-07f98f140883 is in state STARTED 2025-09-03 01:03:41.908148 | orchestrator | 2025-09-03 01:03:41 | INFO  | Task 26b38d1d-0318-4371-9dc3-415f6f53dccc is in state SUCCESS 2025-09-03 01:03:41.909760 | orchestrator | 2025-09-03 01:03:41.909798 | orchestrator | 2025-09-03 01:03:41.909811 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-03 01:03:41.909823 | orchestrator | 2025-09-03 01:03:41.909835 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-03 01:03:41.909847 | orchestrator | Wednesday 03 September 2025 01:00:42 +0000 (0:00:00.471) 0:00:00.471 *** 2025-09-03 01:03:41.909858 | orchestrator | ok: [testbed-node-0] 2025-09-03 01:03:41.909886 | orchestrator | ok: [testbed-node-1] 2025-09-03 01:03:41.909898 | orchestrator | ok: [testbed-node-2] 2025-09-03 01:03:41.909909 | orchestrator | 2025-09-03 01:03:41.910744 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-03 01:03:41.910766 | orchestrator | Wednesday 03 September 2025 01:00:43 +0000 (0:00:00.774) 0:00:01.246 *** 2025-09-03 01:03:41.910778 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2025-09-03 01:03:41.910803 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2025-09-03 01:03:41.910814 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2025-09-03 01:03:41.910825 | orchestrator | 2025-09-03 01:03:41.910836 | orchestrator | PLAY [Apply role designate] **************************************************** 2025-09-03 01:03:41.910847 | orchestrator | 2025-09-03 01:03:41.911915 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-09-03 01:03:41.911992 | orchestrator | Wednesday 03 
September 2025 01:00:43 +0000 (0:00:00.792) 0:00:02.038 *** 2025-09-03 01:03:41.912021 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 01:03:41.912033 | orchestrator | 2025-09-03 01:03:41.912045 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2025-09-03 01:03:41.912056 | orchestrator | Wednesday 03 September 2025 01:00:44 +0000 (0:00:00.864) 0:00:02.903 *** 2025-09-03 01:03:41.912066 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2025-09-03 01:03:41.912077 | orchestrator | 2025-09-03 01:03:41.912088 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2025-09-03 01:03:41.912099 | orchestrator | Wednesday 03 September 2025 01:00:48 +0000 (0:00:03.527) 0:00:06.431 *** 2025-09-03 01:03:41.912110 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2025-09-03 01:03:41.912121 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2025-09-03 01:03:41.912132 | orchestrator | 2025-09-03 01:03:41.912190 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2025-09-03 01:03:41.912229 | orchestrator | Wednesday 03 September 2025 01:00:54 +0000 (0:00:06.366) 0:00:12.798 *** 2025-09-03 01:03:41.912240 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-03 01:03:41.912252 | orchestrator | 2025-09-03 01:03:41.912263 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2025-09-03 01:03:41.912274 | orchestrator | Wednesday 03 September 2025 01:00:57 +0000 (0:00:03.203) 0:00:16.001 *** 2025-09-03 01:03:41.912285 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-03 01:03:41.912296 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2025-09-03 01:03:41.912306 | orchestrator | 2025-09-03 01:03:41.912317 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2025-09-03 01:03:41.912328 | orchestrator | Wednesday 03 September 2025 01:01:01 +0000 (0:00:03.709) 0:00:19.711 *** 2025-09-03 01:03:41.912356 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-03 01:03:41.912368 | orchestrator | 2025-09-03 01:03:41.912379 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2025-09-03 01:03:41.912390 | orchestrator | Wednesday 03 September 2025 01:01:04 +0000 (0:00:03.093) 0:00:22.805 *** 2025-09-03 01:03:41.912401 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2025-09-03 01:03:41.912412 | orchestrator | 2025-09-03 01:03:41.912423 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2025-09-03 01:03:41.912434 | orchestrator | Wednesday 03 September 2025 01:01:08 +0000 (0:00:03.992) 0:00:26.797 *** 2025-09-03 01:03:41.912449 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-03 01:03:41.912522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-03 01:03:41.912547 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-03 01:03:41.912562 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-03 01:03:41.912577 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 
'timeout': '30'}}}) 2025-09-03 01:03:41.912600 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-03 01:03:41.912614 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-03 01:03:41.912663 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-03 01:03:41.912678 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-03 01:03:41.912698 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-03 01:03:41.912713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-03 01:03:41.912734 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-03 01:03:41.912747 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-03 01:03:41.912761 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-03 01:03:41.912808 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-03 01:03:41.912823 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-03 01:03:41.912842 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 
'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-03 01:03:41.912857 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-03 01:03:41.912878 | orchestrator | 2025-09-03 01:03:41.912890 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-09-03 01:03:41.912901 | orchestrator | Wednesday 03 September 2025 01:01:11 +0000 (0:00:03.195) 0:00:29.993 *** 2025-09-03 01:03:41.912912 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:03:41.912994 | orchestrator | 2025-09-03 01:03:41.913006 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2025-09-03 01:03:41.913017 | orchestrator | Wednesday 03 September 2025 01:01:12 +0000 (0:00:00.092) 0:00:30.085 *** 2025-09-03 01:03:41.913028 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:03:41.913039 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:03:41.913050 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:03:41.913062 | orchestrator | 2025-09-03 01:03:41.913077 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-09-03 01:03:41.913096 | orchestrator | Wednesday 03 September 2025 01:01:12 +0000 (0:00:00.210) 0:00:30.296 *** 2025-09-03 01:03:41.913115 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 01:03:41.913135 | orchestrator | 2025-09-03 01:03:41.913152 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-09-03 01:03:41.913170 | orchestrator | Wednesday 03 September 2025 01:01:12 +0000 (0:00:00.687) 0:00:30.983 *** 2025-09-03 01:03:41.913190 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9001', 'listen_port': '9001'}}}}) 2025-09-03 01:03:41.913289 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-03 01:03:41.913324 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-03 01:03:41.913358 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-03 01:03:41.913379 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-03 01:03:41.913398 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-03 01:03:41.913420 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-03 01:03:41.913510 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-03 01:03:41.913541 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-03 01:03:41.913578 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-03 01:03:41.913597 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-03 01:03:41.913613 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-03 01:03:41.913631 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-03 01:03:41.913692 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-03 01:03:41.913712 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-03 01:03:41.913729 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-03 01:03:41.913752 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-03 01:03:41.913763 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-03 01:03:41.913773 | orchestrator | 2025-09-03 01:03:41.913783 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-09-03 01:03:41.913792 | orchestrator | Wednesday 03 September 2025 01:01:19 +0000 (0:00:06.444) 0:00:37.428 *** 2025-09-03 01:03:41.913803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-03 01:03:41.913813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-03 01:03:41.913852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-03 01:03:41.913875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-03 01:03:41.913886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-03 01:03:41.913897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-03 01:03:41.913907 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:03:41.913942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-03 01:03:41.913954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-03 01:03:41.913991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-03 01:03:41.914046 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-03 01:03:41.914060 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-03 01:03:41.914070 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-03 01:03:41.914080 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:03:41.914091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-03 01:03:41.914101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-03 01:03:41.914142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-03 01:03:41.914162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-03 01:03:41.914177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-03 01:03:41.914187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-03 01:03:41.914197 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:03:41.914207 | orchestrator | 2025-09-03 01:03:41.914217 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2025-09-03 01:03:41.914227 | orchestrator | Wednesday 03 September 2025 01:01:20 +0000 (0:00:01.081) 0:00:38.509 *** 2025-09-03 01:03:41.914238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-03 01:03:41.914249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-03 01:03:41.914283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-03 01:03:41.914302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-03 01:03:41.914317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-03 01:03:41.914327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-03 01:03:41.914338 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:03:41.914348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-03 01:03:41.914359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-03 01:03:41.914393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-03 01:03:41.914411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-03 01:03:41.914426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-03 01:03:41.914437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-03 01:03:41.914447 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:03:41.914457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-03 01:03:41.914468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-03 01:03:41.914485 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-03 01:03:41.914520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-03 01:03:41.914536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-03 01:03:41.914547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-03 01:03:41.914557 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:03:41.914567 | orchestrator | 2025-09-03 01:03:41.914577 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2025-09-03 01:03:41.914587 | orchestrator | Wednesday 03 September 2025 01:01:21 +0000 (0:00:01.328) 0:00:39.837 *** 2025-09-03 01:03:41.914597 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-03 01:03:41.914608 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-03 01:03:41.914651 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-03 01:03:41.914667 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-03 01:03:41.914678 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-03 01:03:41.914689 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-03 01:03:41.914699 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-03 01:03:41.914710 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-03 01:03:41.914754 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-03 01:03:41.914766 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-03 01:03:41.914781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-03 01:03:41.914792 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-03 01:03:41.914802 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-03 01:03:41.914813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-03 01:03:41.914829 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-03 01:03:41.914867 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-03 01:03:41.914879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-03 01:03:41.914893 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-03 01:03:41.914904 | orchestrator | 2025-09-03 
01:03:41.914914 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2025-09-03 01:03:41.914941 | orchestrator | Wednesday 03 September 2025 01:01:28 +0000 (0:00:06.285) 0:00:46.123 *** 2025-09-03 01:03:41.914951 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-03 01:03:41.914962 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-03 01:03:41.914979 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-03 01:03:41.914995 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-03 01:03:41.915010 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-03 01:03:41.915021 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-03 01:03:41.915031 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-03 01:03:41.915048 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-03 01:03:41.915058 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-03 01:03:41.915077 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-03 01:03:41.915088 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-03 01:03:41.915102 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-03 01:03:41.915113 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-03 01:03:41.915123 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-03 01:03:41.915142 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-03 01:03:41.915152 
| orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-03 01:03:41.915169 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-03 01:03:41.915180 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-03 01:03:41.915190 | orchestrator | 2025-09-03 01:03:41.915200 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2025-09-03 01:03:41.915210 | orchestrator | Wednesday 03 September 2025 01:01:47 +0000 (0:00:19.207) 0:01:05.330 *** 2025-09-03 01:03:41.915224 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-09-03 01:03:41.915234 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-09-03 01:03:41.915244 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-09-03 01:03:41.915254 | orchestrator | 2025-09-03 01:03:41.915264 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2025-09-03 01:03:41.915274 | orchestrator | Wednesday 03 September 2025 01:01:51 +0000 (0:00:04.716) 0:01:10.047 *** 2025-09-03 01:03:41.915283 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-09-03 01:03:41.915293 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-09-03 01:03:41.915303 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-09-03 01:03:41.915312 | orchestrator | 2025-09-03 01:03:41.915322 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2025-09-03 01:03:41.915338 | orchestrator | Wednesday 03 September 2025 01:01:54 +0000 (0:00:02.352) 0:01:12.399 *** 2025-09-03 01:03:41.915348 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-03 01:03:41.915359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-03 01:03:41.915375 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-03 01:03:41.915387 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-03 01:03:41.915402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-03 01:03:41.915418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-03 01:03:41.915429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-03 01:03:41.915439 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-03 01:03:41.915450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-03 01:03:41.915465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-mdns 5672'], 'timeout': '30'}}})  2025-09-03 01:03:41.915480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-03 01:03:41.915491 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-03 01:03:41.915507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-03 01:03:41.915517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-03 01:03:41.915528 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-03 01:03:41.915538 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-03 01:03:41.915553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-03 01:03:41.915572 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-03 01:03:41.915588 | orchestrator | 2025-09-03 01:03:41.915598 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2025-09-03 01:03:41.915608 | orchestrator | Wednesday 03 September 2025 01:01:56 +0000 (0:00:02.409) 0:01:14.809 *** 2025-09-03 01:03:41.915626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-03 01:03:41.915645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-03 01:03:41.915663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-03 01:03:41.915689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-03 01:03:41.915714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-03 01:03:41.915742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-03 01:03:41.915758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 
5672'], 'timeout': '30'}}})  2025-09-03 01:03:41.915777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-03 01:03:41.915795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-03 01:03:41.915811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-03 01:03:41.915838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-03 01:03:41.915867 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-03 01:03:41.915898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 
'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-03 01:03:41.915909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-03 01:03:41.915940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-03 01:03:41.915951 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-03 01:03:41.915966 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-03 01:03:41.915977 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-03 01:03:41.915994 | orchestrator | 2025-09-03 01:03:41.916004 | 
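The loop items dumped above all follow kolla-ansible's service-definition layout: one dict per designate service with container_name, group, enabled, image, volumes and an optional healthcheck/haproxy block. A minimal sketch in plain Python of that structure, and of the kind of per-item condition that makes the same item report changed on one node and skipping on another; the variable name and the filter predicate are illustrative assumptions, not kolla-ansible's actual variables or task conditions.

# Illustrative sketch only: mirrors the structure printed in the loop items
# above. The names and the filter below are assumptions, not kolla-ansible's
# real implementation; the actual skip conditions vary per task (for example,
# rndc.key is only copied for the bind9 backend and the worker).
designate_services = {
    "designate-worker": {
        "container_name": "designate_worker",
        "group": "designate-worker",
        "enabled": True,
        "image": "registry.osism.tech/kolla/designate-worker:2024.2",
        "volumes": [
            "/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "/etc/timezone:/etc/timezone:ro",
            "kolla_logs:/var/log/kolla/",
        ],
        "healthcheck": {
            "interval": "30",
            "retries": "3",
            "start_period": "5",
            "test": ["CMD-SHELL", "healthcheck_port designate-worker 5672"],
            "timeout": "30",
        },
    },
    # designate-api, designate-backend-bind9, designate-central, designate-mdns
    # and designate-producer follow the same shape.
}


def items_for_host(services, wanted, host_groups):
    """Keep enabled services that a task cares about and that run on this host.

    Roughly why the same loop item shows up as 'changed' on one node and as
    'skipping' on another in the task output above.
    """
    return {
        name: svc
        for name, svc in services.items()
        if svc["enabled"] and name in wanted and svc["group"] in host_groups
    }
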
orchestrator | TASK [designate : include_tasks] *********************************************** 2025-09-03 01:03:41.916014 | orchestrator | Wednesday 03 September 2025 01:01:59 +0000 (0:00:02.684) 0:01:17.493 *** 2025-09-03 01:03:41.916024 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:03:41.916035 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:03:41.916045 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:03:41.916054 | orchestrator | 2025-09-03 01:03:41.916069 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-09-03 01:03:41.916079 | orchestrator | Wednesday 03 September 2025 01:01:59 +0000 (0:00:00.298) 0:01:17.792 *** 2025-09-03 01:03:41.916089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-03 01:03:41.916099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-03 01:03:41.916109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-03 01:03:41.916120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-03 
01:03:41.916137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-03 01:03:41.916153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-03 01:03:41.916163 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:03:41.916178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-03 01:03:41.916188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-03 01:03:41.916199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-03 
01:03:41.916209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-03 01:03:41.916225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-03 01:03:41.916241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-03 01:03:41.916251 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:03:41.916266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-03 01:03:41.916277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-03 01:03:41.916287 | 
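Every container definition above also declares a Docker healthcheck whose test is one of healthcheck_curl, healthcheck_port or healthcheck_listen. As a rough Python illustration of what those commands amount to (these are not the real kolla healthcheck scripts; healthcheck_port, which verifies that the named process holds a socket on the given port, is omitted because a faithful version would need to walk /proc):

import socket
import urllib.error
import urllib.request


def healthcheck_curl(url, timeout=30):
    """Roughly what 'healthcheck_curl http://192.168.16.10:9001' checks:
    the API answers HTTP on the expected address and port."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status < 500
    except urllib.error.HTTPError as exc:
        return exc.code < 500
    except OSError:
        return False


def healthcheck_listen(port, host="127.0.0.1", timeout=5):
    """Roughly what 'healthcheck_listen named 53' checks: something accepts
    TCP connections on the port inside the container."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
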
orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-03 01:03:41.916297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-03 01:03:41.916308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-03 01:03:41.916328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-03 01:03:41.916339 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:03:41.916349 | orchestrator | 2025-09-03 01:03:41.916359 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-09-03 01:03:41.916369 | orchestrator | Wednesday 03 September 2025 01:02:00 +0000 (0:00:01.254) 0:01:19.046 *** 2025-09-03 01:03:41.916383 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': 
'9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-03 01:03:41.916394 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-03 01:03:41.916405 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-03 01:03:41.916415 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-03 01:03:41.916437 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-03 01:03:41.916455 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-03 01:03:41.916466 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-03 01:03:41.916476 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-03 01:03:41.916486 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-03 01:03:41.916497 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-03 01:03:41.916517 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 
'timeout': '30'}}}) 2025-09-03 01:03:41.916528 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-03 01:03:41.916543 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-03 01:03:41.916554 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-03 01:03:41.916564 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-03 01:03:41.916574 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-03 01:03:41.916584 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-03 01:03:41.916605 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-03 01:03:41.916615 | orchestrator | 2025-09-03 01:03:41.916625 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-09-03 01:03:41.916635 | orchestrator | Wednesday 03 September 2025 01:02:05 +0000 (0:00:04.540) 0:01:23.587 *** 2025-09-03 01:03:41.916645 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:03:41.916655 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:03:41.916665 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:03:41.916675 | orchestrator | 2025-09-03 01:03:41.916684 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2025-09-03 01:03:41.916694 | orchestrator | Wednesday 03 September 2025 01:02:05 +0000 (0:00:00.267) 0:01:23.854 *** 2025-09-03 01:03:41.916704 | orchestrator | changed: [testbed-node-0] => (item=designate) 2025-09-03 01:03:41.916714 | orchestrator | 2025-09-03 01:03:41.916724 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2025-09-03 01:03:41.916733 | orchestrator | Wednesday 03 September 2025 01:02:07 +0000 (0:00:02.084) 0:01:25.939 *** 2025-09-03 01:03:41.916743 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-03 01:03:41.916753 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2025-09-03 01:03:41.916763 | orchestrator | 2025-09-03 01:03:41.916777 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2025-09-03 01:03:41.916787 | orchestrator | Wednesday 03 September 2025 01:02:10 +0000 (0:00:02.240) 0:01:28.180 *** 2025-09-03 01:03:41.916796 | orchestrator | changed: [testbed-node-0] 2025-09-03 01:03:41.916806 | orchestrator | 2025-09-03 01:03:41.916816 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-09-03 01:03:41.916825 | orchestrator | Wednesday 03 September 2025 01:02:27 +0000 (0:00:17.187) 0:01:45.368 *** 2025-09-03 01:03:41.916835 | orchestrator | 2025-09-03 01:03:41.916845 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-09-03 01:03:41.916855 | orchestrator | Wednesday 03 September 2025 01:02:27 +0000 (0:00:00.665) 0:01:46.034 *** 2025-09-03 01:03:41.916864 | orchestrator | 2025-09-03 01:03:41.916874 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-09-03 01:03:41.916884 | orchestrator | Wednesday 03 September 2025 01:02:28 +0000 (0:00:00.147) 0:01:46.181 *** 2025-09-03 01:03:41.916894 | orchestrator | 2025-09-03 01:03:41.916903 | orchestrator | RUNNING HANDLER [designate 
: Restart designate-backend-bind9 container] ******** 2025-09-03 01:03:41.916913 | orchestrator | Wednesday 03 September 2025 01:02:28 +0000 (0:00:00.100) 0:01:46.282 *** 2025-09-03 01:03:41.916973 | orchestrator | changed: [testbed-node-0] 2025-09-03 01:03:41.916984 | orchestrator | changed: [testbed-node-1] 2025-09-03 01:03:41.916994 | orchestrator | changed: [testbed-node-2] 2025-09-03 01:03:41.917004 | orchestrator | 2025-09-03 01:03:41.917014 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2025-09-03 01:03:41.917024 | orchestrator | Wednesday 03 September 2025 01:02:38 +0000 (0:00:10.284) 0:01:56.567 *** 2025-09-03 01:03:41.917040 | orchestrator | changed: [testbed-node-0] 2025-09-03 01:03:41.917051 | orchestrator | changed: [testbed-node-1] 2025-09-03 01:03:41.917061 | orchestrator | changed: [testbed-node-2] 2025-09-03 01:03:41.917070 | orchestrator | 2025-09-03 01:03:41.917080 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2025-09-03 01:03:41.917090 | orchestrator | Wednesday 03 September 2025 01:02:49 +0000 (0:00:10.833) 0:02:07.400 *** 2025-09-03 01:03:41.917100 | orchestrator | changed: [testbed-node-0] 2025-09-03 01:03:41.917110 | orchestrator | changed: [testbed-node-2] 2025-09-03 01:03:41.917120 | orchestrator | changed: [testbed-node-1] 2025-09-03 01:03:41.917129 | orchestrator | 2025-09-03 01:03:41.917139 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2025-09-03 01:03:41.917149 | orchestrator | Wednesday 03 September 2025 01:03:00 +0000 (0:00:11.375) 0:02:18.776 *** 2025-09-03 01:03:41.917159 | orchestrator | changed: [testbed-node-1] 2025-09-03 01:03:41.917170 | orchestrator | changed: [testbed-node-2] 2025-09-03 01:03:41.917188 | orchestrator | changed: [testbed-node-0] 2025-09-03 01:03:41.917203 | orchestrator | 2025-09-03 01:03:41.917220 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2025-09-03 01:03:41.917235 | orchestrator | Wednesday 03 September 2025 01:03:10 +0000 (0:00:09.453) 0:02:28.230 *** 2025-09-03 01:03:41.917251 | orchestrator | changed: [testbed-node-2] 2025-09-03 01:03:41.917267 | orchestrator | changed: [testbed-node-0] 2025-09-03 01:03:41.917284 | orchestrator | changed: [testbed-node-1] 2025-09-03 01:03:41.917300 | orchestrator | 2025-09-03 01:03:41.917311 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2025-09-03 01:03:41.917320 | orchestrator | Wednesday 03 September 2025 01:03:20 +0000 (0:00:10.608) 0:02:38.839 *** 2025-09-03 01:03:41.917330 | orchestrator | changed: [testbed-node-0] 2025-09-03 01:03:41.917340 | orchestrator | changed: [testbed-node-1] 2025-09-03 01:03:41.917350 | orchestrator | changed: [testbed-node-2] 2025-09-03 01:03:41.917360 | orchestrator | 2025-09-03 01:03:41.917370 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2025-09-03 01:03:41.917379 | orchestrator | Wednesday 03 September 2025 01:03:32 +0000 (0:00:11.412) 0:02:50.251 *** 2025-09-03 01:03:41.917389 | orchestrator | changed: [testbed-node-0] 2025-09-03 01:03:41.917399 | orchestrator | 2025-09-03 01:03:41.917409 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-03 01:03:41.917418 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-03 01:03:41.917427 | 
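The PLAY RECAP lines here (per-host ok/changed/unreachable/failed/skipped/rescued/ignored counts) are what a wrapper script would typically inspect to decide whether this kolla-ansible run succeeded. A minimal, illustrative parser for recap lines of the form shown in this log (plain Python, not part of OSISM or Zuul):

import re

# Matches recap lines such as:
#   testbed-node-0 : ok=29 changed=23 unreachable=0 failed=0 skipped=7 rescued=0 ignored=0
# (strip the "<timestamp> | orchestrator | " prefix first).
RECAP_RE = re.compile(
    r"^(?P<host>\S+)\s*:\s*ok=(?P<ok>\d+)\s+changed=(?P<changed>\d+)\s+"
    r"unreachable=(?P<unreachable>\d+)\s+failed=(?P<failed>\d+)\s+"
    r"skipped=(?P<skipped>\d+)\s+rescued=(?P<rescued>\d+)\s+ignored=(?P<ignored>\d+)"
)


def recap_ok(line):
    """Return True if the host finished with no failed and no unreachable
    tasks, None if the line is not a recap line."""
    match = RECAP_RE.match(line.strip())
    if match is None:
        return None
    return int(match["failed"]) == 0 and int(match["unreachable"]) == 0
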
orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-03 01:03:41.917435 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-03 01:03:41.917443 | orchestrator |
2025-09-03 01:03:41.917451 | orchestrator |
2025-09-03 01:03:41.917465 | orchestrator | TASKS RECAP ********************************************************************
2025-09-03 01:03:41.917474 | orchestrator | Wednesday 03 September 2025 01:03:38 +0000 (0:00:06.819) 0:02:57.070 ***
2025-09-03 01:03:41.917482 | orchestrator | ===============================================================================
2025-09-03 01:03:41.917489 | orchestrator | designate : Copying over designate.conf -------------------------------- 19.21s
2025-09-03 01:03:41.917497 | orchestrator | designate : Running Designate bootstrap container ---------------------- 17.19s
2025-09-03 01:03:41.917505 | orchestrator | designate : Restart designate-worker container ------------------------- 11.41s
2025-09-03 01:03:41.917513 | orchestrator | designate : Restart designate-central container ------------------------ 11.38s
2025-09-03 01:03:41.917521 | orchestrator | designate : Restart designate-api container ---------------------------- 10.83s
2025-09-03 01:03:41.917529 | orchestrator | designate : Restart designate-mdns container --------------------------- 10.61s
2025-09-03 01:03:41.917543 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 10.29s
2025-09-03 01:03:41.917551 | orchestrator | designate : Restart designate-producer container ------------------------ 9.45s
2025-09-03 01:03:41.917559 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 6.82s
2025-09-03 01:03:41.917567 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.44s
2025-09-03 01:03:41.917580 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.37s
2025-09-03 01:03:41.917588 | orchestrator | designate : Copying over config.json files for services ----------------- 6.29s
2025-09-03 01:03:41.917596 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 4.72s
2025-09-03 01:03:41.917604 | orchestrator | designate : Check designate containers ---------------------------------- 4.54s
2025-09-03 01:03:41.917612 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 3.99s
2025-09-03 01:03:41.917620 | orchestrator | service-ks-register : designate | Creating users ------------------------ 3.71s
2025-09-03 01:03:41.917628 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.53s
2025-09-03 01:03:41.917636 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.20s
2025-09-03 01:03:41.917644 | orchestrator | designate : Ensuring config directories exist --------------------------- 3.20s
2025-09-03 01:03:41.917652 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.09s
2025-09-03 01:03:41.917659 | orchestrator | 2025-09-03 01:03:41 | INFO  | Task 1d321bf0-7329-4f26-b96d-940e598cfe39 is in state STARTED
2025-09-03 01:03:41.917668 | orchestrator | 2025-09-03 01:03:41 | INFO  | Wait 1 second(s) until the next check
2025-09-03 01:03:44.933413 | orchestrator | 2025-09-03 01:03:44 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state
STARTED 2025-09-03 01:03:44.937030 | orchestrator | 2025-09-03 01:03:44 | INFO  | Task 88d17896-3116-442f-be7a-a99034bbb9d9 is in state STARTED 2025-09-03 01:03:44.937317 | orchestrator | 2025-09-03 01:03:44 | INFO  | Task 6badcc18-5225-4448-a1f9-07f98f140883 is in state STARTED 2025-09-03 01:03:44.938001 | orchestrator | 2025-09-03 01:03:44 | INFO  | Task 1d321bf0-7329-4f26-b96d-940e598cfe39 is in state STARTED 2025-09-03 01:03:44.938065 | orchestrator | 2025-09-03 01:03:44 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:03:47.998416 | orchestrator | 2025-09-03 01:03:47 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:03:47.999212 | orchestrator | 2025-09-03 01:03:47 | INFO  | Task 88d17896-3116-442f-be7a-a99034bbb9d9 is in state STARTED 2025-09-03 01:03:48.001502 | orchestrator | 2025-09-03 01:03:48 | INFO  | Task 6badcc18-5225-4448-a1f9-07f98f140883 is in state STARTED 2025-09-03 01:03:48.002237 | orchestrator | 2025-09-03 01:03:48 | INFO  | Task 1d321bf0-7329-4f26-b96d-940e598cfe39 is in state STARTED 2025-09-03 01:03:48.002266 | orchestrator | 2025-09-03 01:03:48 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:03:51.050979 | orchestrator | 2025-09-03 01:03:51 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:03:51.052056 | orchestrator | 2025-09-03 01:03:51 | INFO  | Task 88d17896-3116-442f-be7a-a99034bbb9d9 is in state STARTED 2025-09-03 01:03:51.053399 | orchestrator | 2025-09-03 01:03:51 | INFO  | Task 6badcc18-5225-4448-a1f9-07f98f140883 is in state STARTED 2025-09-03 01:03:51.055448 | orchestrator | 2025-09-03 01:03:51 | INFO  | Task 1d321bf0-7329-4f26-b96d-940e598cfe39 is in state STARTED 2025-09-03 01:03:51.055476 | orchestrator | 2025-09-03 01:03:51 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:03:54.100596 | orchestrator | 2025-09-03 01:03:54 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:03:54.100731 | orchestrator | 2025-09-03 01:03:54 | INFO  | Task 88d17896-3116-442f-be7a-a99034bbb9d9 is in state STARTED 2025-09-03 01:03:54.102656 | orchestrator | 2025-09-03 01:03:54 | INFO  | Task 6badcc18-5225-4448-a1f9-07f98f140883 is in state STARTED 2025-09-03 01:03:54.104723 | orchestrator | 2025-09-03 01:03:54 | INFO  | Task 1d321bf0-7329-4f26-b96d-940e598cfe39 is in state STARTED 2025-09-03 01:03:54.105151 | orchestrator | 2025-09-03 01:03:54 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:03:57.144723 | orchestrator | 2025-09-03 01:03:57 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:03:57.146735 | orchestrator | 2025-09-03 01:03:57 | INFO  | Task 88d17896-3116-442f-be7a-a99034bbb9d9 is in state STARTED 2025-09-03 01:03:57.147601 | orchestrator | 2025-09-03 01:03:57 | INFO  | Task 6badcc18-5225-4448-a1f9-07f98f140883 is in state STARTED 2025-09-03 01:03:57.149730 | orchestrator | 2025-09-03 01:03:57 | INFO  | Task 1d321bf0-7329-4f26-b96d-940e598cfe39 is in state STARTED 2025-09-03 01:03:57.149752 | orchestrator | 2025-09-03 01:03:57 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:04:00.204526 | orchestrator | 2025-09-03 01:04:00 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:04:00.211146 | orchestrator | 2025-09-03 01:04:00 | INFO  | Task 88d17896-3116-442f-be7a-a99034bbb9d9 is in state STARTED 2025-09-03 01:04:00.212262 | orchestrator | 2025-09-03 01:04:00 | INFO  | Task 6badcc18-5225-4448-a1f9-07f98f140883 is in state 
STARTED 2025-09-03 01:04:00.213126 | orchestrator | 2025-09-03 01:04:00 | INFO  | Task 1d321bf0-7329-4f26-b96d-940e598cfe39 is in state STARTED 2025-09-03 01:04:00.213140 | orchestrator | 2025-09-03 01:04:00 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:04:03.252269 | orchestrator | 2025-09-03 01:04:03 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:04:03.254766 | orchestrator | 2025-09-03 01:04:03 | INFO  | Task 88d17896-3116-442f-be7a-a99034bbb9d9 is in state STARTED 2025-09-03 01:04:03.256751 | orchestrator | 2025-09-03 01:04:03 | INFO  | Task 6badcc18-5225-4448-a1f9-07f98f140883 is in state STARTED 2025-09-03 01:04:03.258690 | orchestrator | 2025-09-03 01:04:03 | INFO  | Task 1d321bf0-7329-4f26-b96d-940e598cfe39 is in state STARTED 2025-09-03 01:04:03.259254 | orchestrator | 2025-09-03 01:04:03 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:04:06.303430 | orchestrator | 2025-09-03 01:04:06 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:04:06.304992 | orchestrator | 2025-09-03 01:04:06 | INFO  | Task 88d17896-3116-442f-be7a-a99034bbb9d9 is in state STARTED 2025-09-03 01:04:06.306146 | orchestrator | 2025-09-03 01:04:06 | INFO  | Task 6badcc18-5225-4448-a1f9-07f98f140883 is in state STARTED 2025-09-03 01:04:06.307085 | orchestrator | 2025-09-03 01:04:06 | INFO  | Task 1d321bf0-7329-4f26-b96d-940e598cfe39 is in state STARTED 2025-09-03 01:04:06.307215 | orchestrator | 2025-09-03 01:04:06 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:04:09.351166 | orchestrator | 2025-09-03 01:04:09 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:04:09.351591 | orchestrator | 2025-09-03 01:04:09 | INFO  | Task 88d17896-3116-442f-be7a-a99034bbb9d9 is in state STARTED 2025-09-03 01:04:09.352985 | orchestrator | 2025-09-03 01:04:09 | INFO  | Task 6badcc18-5225-4448-a1f9-07f98f140883 is in state STARTED 2025-09-03 01:04:09.354218 | orchestrator | 2025-09-03 01:04:09 | INFO  | Task 1d321bf0-7329-4f26-b96d-940e598cfe39 is in state STARTED 2025-09-03 01:04:09.354262 | orchestrator | 2025-09-03 01:04:09 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:04:12.391058 | orchestrator | 2025-09-03 01:04:12 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:04:12.391485 | orchestrator | 2025-09-03 01:04:12 | INFO  | Task 88d17896-3116-442f-be7a-a99034bbb9d9 is in state STARTED 2025-09-03 01:04:12.392594 | orchestrator | 2025-09-03 01:04:12 | INFO  | Task 6badcc18-5225-4448-a1f9-07f98f140883 is in state STARTED 2025-09-03 01:04:12.393711 | orchestrator | 2025-09-03 01:04:12 | INFO  | Task 1d321bf0-7329-4f26-b96d-940e598cfe39 is in state STARTED 2025-09-03 01:04:12.393742 | orchestrator | 2025-09-03 01:04:12 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:04:15.455257 | orchestrator | 2025-09-03 01:04:15 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:04:15.455696 | orchestrator | 2025-09-03 01:04:15 | INFO  | Task 88d17896-3116-442f-be7a-a99034bbb9d9 is in state SUCCESS 2025-09-03 01:04:15.457274 | orchestrator | 2025-09-03 01:04:15.457306 | orchestrator | 2025-09-03 01:04:15.457319 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-03 01:04:15.457332 | orchestrator | 2025-09-03 01:04:15.457344 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-03 
01:04:15.457355 | orchestrator | Wednesday 03 September 2025 01:03:09 +0000 (0:00:00.266) 0:00:00.266 *** 2025-09-03 01:04:15.457367 | orchestrator | ok: [testbed-node-0] 2025-09-03 01:04:15.457381 | orchestrator | ok: [testbed-node-1] 2025-09-03 01:04:15.457392 | orchestrator | ok: [testbed-node-2] 2025-09-03 01:04:15.457403 | orchestrator | 2025-09-03 01:04:15.457415 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-03 01:04:15.457427 | orchestrator | Wednesday 03 September 2025 01:03:09 +0000 (0:00:00.269) 0:00:00.536 *** 2025-09-03 01:04:15.457439 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2025-09-03 01:04:15.457450 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2025-09-03 01:04:15.457461 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2025-09-03 01:04:15.457473 | orchestrator | 2025-09-03 01:04:15.457484 | orchestrator | PLAY [Apply role placement] **************************************************** 2025-09-03 01:04:15.457495 | orchestrator | 2025-09-03 01:04:15.457506 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-09-03 01:04:15.457517 | orchestrator | Wednesday 03 September 2025 01:03:09 +0000 (0:00:00.413) 0:00:00.949 *** 2025-09-03 01:04:15.457529 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 01:04:15.457540 | orchestrator | 2025-09-03 01:04:15.457551 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2025-09-03 01:04:15.457580 | orchestrator | Wednesday 03 September 2025 01:03:10 +0000 (0:00:00.533) 0:00:01.483 *** 2025-09-03 01:04:15.457592 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2025-09-03 01:04:15.457680 | orchestrator | 2025-09-03 01:04:15.457693 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2025-09-03 01:04:15.457704 | orchestrator | Wednesday 03 September 2025 01:03:13 +0000 (0:00:03.463) 0:00:04.946 *** 2025-09-03 01:04:15.457715 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2025-09-03 01:04:15.457727 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2025-09-03 01:04:15.457738 | orchestrator | 2025-09-03 01:04:15.457749 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2025-09-03 01:04:15.457760 | orchestrator | Wednesday 03 September 2025 01:03:20 +0000 (0:00:06.251) 0:00:11.198 *** 2025-09-03 01:04:15.457771 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-03 01:04:15.457782 | orchestrator | 2025-09-03 01:04:15.457793 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2025-09-03 01:04:15.457829 | orchestrator | Wednesday 03 September 2025 01:03:23 +0000 (0:00:03.523) 0:00:14.721 *** 2025-09-03 01:04:15.457841 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-03 01:04:15.457852 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2025-09-03 01:04:15.457863 | orchestrator | 2025-09-03 01:04:15.457874 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2025-09-03 01:04:15.457885 | orchestrator | Wednesday 03 September 2025 01:03:27 +0000 
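The service-ks-register tasks above register placement in Keystone: a service entry, an internal and a public endpoint on port 8780, the service project, a placement user and an admin role grant. Done by hand with openstacksdk, that registration would look roughly like the sketch below; the endpoint URLs are taken from the log, while the cloud name, region and password are placeholders, and the proxy method names are assumed from openstacksdk's identity API rather than taken from the playbook.

import openstack

# Illustrative approximation of what service-ks-register does for placement.
# "testbed" (clouds.yaml entry), "RegionOne" and the password are assumptions.
conn = openstack.connect(cloud="testbed")

service = conn.identity.create_service(name="placement", type="placement")
for interface, url in [
    ("internal", "https://api-int.testbed.osism.xyz:8780"),
    ("public", "https://api.testbed.osism.xyz:8780"),
]:
    conn.identity.create_endpoint(
        service_id=service.id, interface=interface, url=url, region_id="RegionOne"
    )

project = conn.identity.find_project("service")
user = conn.identity.create_user(
    name="placement", password="<secret>", default_project_id=project.id
)
admin_role = conn.identity.find_role("admin")
conn.identity.assign_project_role_to_user(project, user, admin_role)
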
(0:00:03.870) 0:00:18.592 *** 2025-09-03 01:04:15.457895 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-03 01:04:15.457906 | orchestrator | 2025-09-03 01:04:15.457939 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2025-09-03 01:04:15.457951 | orchestrator | Wednesday 03 September 2025 01:03:30 +0000 (0:00:03.263) 0:00:21.856 *** 2025-09-03 01:04:15.457962 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2025-09-03 01:04:15.457973 | orchestrator | 2025-09-03 01:04:15.457985 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-09-03 01:04:15.457996 | orchestrator | Wednesday 03 September 2025 01:03:34 +0000 (0:00:04.083) 0:00:25.939 *** 2025-09-03 01:04:15.458007 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:04:15.458090 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:04:15.458106 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:04:15.458117 | orchestrator | 2025-09-03 01:04:15.458129 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2025-09-03 01:04:15.458140 | orchestrator | Wednesday 03 September 2025 01:03:35 +0000 (0:00:00.297) 0:00:26.236 *** 2025-09-03 01:04:15.458154 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-03 01:04:15.458185 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-03 01:04:15.458207 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-03 01:04:15.458229 | orchestrator | 2025-09-03 01:04:15.458241 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-09-03 01:04:15.458252 | orchestrator | Wednesday 03 September 2025 01:03:36 +0000 (0:00:00.796) 0:00:27.033 *** 2025-09-03 01:04:15.458263 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:04:15.458274 | orchestrator | 2025-09-03 01:04:15.458285 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-09-03 01:04:15.458296 | orchestrator | Wednesday 03 September 2025 01:03:36 +0000 (0:00:00.135) 0:00:27.168 *** 2025-09-03 01:04:15.458405 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:04:15.458420 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:04:15.458434 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:04:15.458447 | orchestrator | 2025-09-03 01:04:15.458459 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-09-03 01:04:15.458473 | orchestrator | Wednesday 03 September 2025 01:03:36 +0000 (0:00:00.464) 0:00:27.633 *** 2025-09-03 01:04:15.458486 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 01:04:15.458499 | orchestrator | 2025-09-03 01:04:15.458512 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-09-03 01:04:15.458526 | orchestrator | Wednesday 03 September 2025 01:03:37 +0000 (0:00:00.511) 0:00:28.144 *** 2025-09-03 01:04:15.458539 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-03 01:04:15.458565 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-03 01:04:15.458586 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-03 01:04:15.458610 | orchestrator | 2025-09-03 01:04:15.458623 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-09-03 01:04:15.458636 | orchestrator | Wednesday 03 September 2025 01:03:38 +0000 (0:00:01.437) 0:00:29.581 *** 2025-09-03 01:04:15.458650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-03 01:04:15.458664 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:04:15.458678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 
'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-03 01:04:15.458690 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:04:15.458709 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-03 01:04:15.458721 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:04:15.458732 | orchestrator | 2025-09-03 01:04:15.458743 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-09-03 01:04:15.458754 | orchestrator | Wednesday 03 September 2025 01:03:39 +0000 (0:00:01.204) 0:00:30.786 *** 2025-09-03 01:04:15.458766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-03 01:04:15.458785 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:04:15.458802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-03 01:04:15.458813 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:04:15.458825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 
'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-03 01:04:15.458836 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:04:15.458847 | orchestrator | 2025-09-03 01:04:15.458858 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-09-03 01:04:15.458870 | orchestrator | Wednesday 03 September 2025 01:03:41 +0000 (0:00:01.243) 0:00:32.030 *** 2025-09-03 01:04:15.458887 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-03 01:04:15.458900 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-03 01:04:15.458956 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-03 01:04:15.458969 | orchestrator | 2025-09-03 01:04:15.458980 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-09-03 01:04:15.458992 | orchestrator | Wednesday 03 September 2025 01:03:42 +0000 (0:00:01.563) 0:00:33.593 *** 2025-09-03 01:04:15.459003 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-03 01:04:15.459015 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-03 01:04:15.459035 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-03 01:04:15.459054 | orchestrator | 2025-09-03 
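[Annotation] The (item=...) payloads in the placement tasks above are Python dict representations of the kolla-ansible service definition for placement-api. Below is a minimal, hypothetical sketch (not part of this job) of how such an item can be summarized; the dict literal is abridged from the values logged above, and the summarize() helper name is an assumption made for illustration.

    # Abridged from the placement-api item logged above (node-0 shown).
    item = {
        "key": "placement-api",
        "value": {
            "container_name": "placement_api",
            "image": "registry.osism.tech/kolla/placement-api:2024.2",
            "enabled": True,
            "healthcheck": {
                "interval": "30", "retries": "3", "start_period": "5",
                "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:8780"],
                "timeout": "30",
            },
            "haproxy": {
                "placement_api": {"enabled": True, "external": False,
                                  "port": "8780", "listen_port": "8780"},
                "placement_api_external": {"enabled": True, "external": True,
                                           "external_fqdn": "api.testbed.osism.xyz",
                                           "port": "8780", "listen_port": "8780"},
            },
        },
    }

    def summarize(service: dict) -> str:
        """Return a one-line summary of a kolla service definition item."""
        value = service["value"]
        check = " ".join(value["healthcheck"]["test"][1:])
        frontends = [name for name, cfg in value["haproxy"].items()
                     if cfg.get("enabled")]
        return (f"{value['container_name']} ({value['image']}): "
                f"healthcheck='{check}', haproxy frontends={frontends}")

    print(summarize(item))
    # placement_api (registry.osism.tech/kolla/placement-api:2024.2):
    #   healthcheck='healthcheck_curl http://192.168.16.10:8780',
    #   haproxy frontends=['placement_api', 'placement_api_external']

[End annotation]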
01:04:15.459065 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-09-03 01:04:15.459077 | orchestrator | Wednesday 03 September 2025 01:03:45 +0000 (0:00:02.453) 0:00:36.047 *** 2025-09-03 01:04:15.459088 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-09-03 01:04:15.459099 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-09-03 01:04:15.459110 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-09-03 01:04:15.459121 | orchestrator | 2025-09-03 01:04:15.459132 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-09-03 01:04:15.459148 | orchestrator | Wednesday 03 September 2025 01:03:46 +0000 (0:00:01.876) 0:00:37.924 *** 2025-09-03 01:04:15.459160 | orchestrator | changed: [testbed-node-0] 2025-09-03 01:04:15.459171 | orchestrator | changed: [testbed-node-1] 2025-09-03 01:04:15.459182 | orchestrator | changed: [testbed-node-2] 2025-09-03 01:04:15.459193 | orchestrator | 2025-09-03 01:04:15.459204 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-09-03 01:04:15.459215 | orchestrator | Wednesday 03 September 2025 01:03:48 +0000 (0:00:01.478) 0:00:39.402 *** 2025-09-03 01:04:15.459227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-03 01:04:15.459238 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:04:15.459250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-03 01:04:15.459261 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:04:15.459286 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 
'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-03 01:04:15.459298 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:04:15.459309 | orchestrator | 2025-09-03 01:04:15.459321 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-09-03 01:04:15.459332 | orchestrator | Wednesday 03 September 2025 01:03:48 +0000 (0:00:00.426) 0:00:39.828 *** 2025-09-03 01:04:15.459348 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-03 01:04:15.459360 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-03 01:04:15.459372 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-03 01:04:15.459384 | orchestrator | 2025-09-03 01:04:15.459402 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-09-03 01:04:15.459413 | orchestrator | Wednesday 03 September 2025 01:03:49 +0000 (0:00:01.068) 0:00:40.896 *** 2025-09-03 01:04:15.459424 | orchestrator | changed: [testbed-node-0] 2025-09-03 01:04:15.459435 | orchestrator | 2025-09-03 01:04:15.459446 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2025-09-03 01:04:15.459457 | orchestrator | Wednesday 03 September 2025 01:03:52 +0000 (0:00:02.499) 0:00:43.396 *** 2025-09-03 01:04:15.459468 | orchestrator | changed: [testbed-node-0] 2025-09-03 01:04:15.459479 | orchestrator | 2025-09-03 01:04:15.459490 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2025-09-03 01:04:15.459501 | orchestrator | Wednesday 03 September 2025 01:03:54 +0000 (0:00:02.227) 0:00:45.624 *** 2025-09-03 01:04:15.459512 | orchestrator | changed: [testbed-node-0] 2025-09-03 01:04:15.459523 | orchestrator | 2025-09-03 01:04:15.459534 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-09-03 01:04:15.459545 | orchestrator | Wednesday 03 September 2025 01:04:07 +0000 (0:00:12.690) 0:00:58.314 *** 2025-09-03 01:04:15.459556 | orchestrator | 2025-09-03 01:04:15.459567 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-09-03 01:04:15.459578 | orchestrator | Wednesday 03 September 2025 01:04:07 +0000 (0:00:00.076) 0:00:58.391 *** 2025-09-03 01:04:15.459589 | orchestrator | 2025-09-03 01:04:15.459606 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-09-03 01:04:15.459618 | orchestrator | Wednesday 03 September 2025 01:04:07 +0000 (0:00:00.066) 0:00:58.457 *** 2025-09-03 01:04:15.459629 | orchestrator | 2025-09-03 01:04:15.459640 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2025-09-03 01:04:15.459650 | orchestrator | Wednesday 03 September 2025 01:04:07 +0000 (0:00:00.071) 0:00:58.528 *** 2025-09-03 01:04:15.459662 | orchestrator | changed: [testbed-node-0] 2025-09-03 01:04:15.459673 | orchestrator | changed: [testbed-node-1] 2025-09-03 01:04:15.459684 | orchestrator | changed: [testbed-node-2] 2025-09-03 01:04:15.459695 | orchestrator | 2025-09-03 01:04:15.459706 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-03 01:04:15.459718 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-03 01:04:15.459731 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-03 01:04:15.459742 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-03 01:04:15.459753 | orchestrator | 2025-09-03 01:04:15.459764 | orchestrator | 2025-09-03 01:04:15.459775 | 
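[Annotation] The PLAY RECAP lines above follow Ansible's fixed "host : ok=N changed=N ..." layout. A small, hypothetical helper (not part of this job) for turning such a line into structured counters; the example line is adapted from the recap above with spacing normalized.

    import re

    # Adapted from the placement PLAY RECAP above.
    line = ("testbed-node-0 : ok=21 changed=15 unreachable=0 "
            "failed=0 skipped=6 rescued=0 ignored=0")

    def parse_recap(recap_line: str) -> tuple[str, dict]:
        """Split an Ansible PLAY RECAP line into (host, counters)."""
        host, _, counters = recap_line.partition(" : ")
        stats = {key: int(val)
                 for key, val in re.findall(r"(\w+)=(\d+)", counters)}
        return host.strip(), stats

    host, stats = parse_recap(line)
    print(host, stats["ok"], stats["changed"], stats["failed"])
    # testbed-node-0 21 15 0

[End annotation]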
orchestrator | TASKS RECAP ******************************************************************** 2025-09-03 01:04:15.459786 | orchestrator | Wednesday 03 September 2025 01:04:13 +0000 (0:00:05.518) 0:01:04.047 *** 2025-09-03 01:04:15.459797 | orchestrator | =============================================================================== 2025-09-03 01:04:15.459812 | orchestrator | placement : Running placement bootstrap container ---------------------- 12.69s 2025-09-03 01:04:15.459824 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.25s 2025-09-03 01:04:15.459835 | orchestrator | placement : Restart placement-api container ----------------------------- 5.52s 2025-09-03 01:04:15.459846 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.08s 2025-09-03 01:04:15.459857 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.87s 2025-09-03 01:04:15.459867 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.52s 2025-09-03 01:04:15.459878 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.46s 2025-09-03 01:04:15.459889 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.26s 2025-09-03 01:04:15.459900 | orchestrator | placement : Creating placement databases -------------------------------- 2.50s 2025-09-03 01:04:15.459970 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.45s 2025-09-03 01:04:15.459984 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.23s 2025-09-03 01:04:15.459995 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.88s 2025-09-03 01:04:15.460006 | orchestrator | placement : Copying over config.json files for services ----------------- 1.56s 2025-09-03 01:04:15.460017 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.48s 2025-09-03 01:04:15.460028 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.44s 2025-09-03 01:04:15.460039 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 1.24s 2025-09-03 01:04:15.460050 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 1.20s 2025-09-03 01:04:15.460061 | orchestrator | placement : Check placement containers ---------------------------------- 1.07s 2025-09-03 01:04:15.460072 | orchestrator | placement : Ensuring config directories exist --------------------------- 0.80s 2025-09-03 01:04:15.460083 | orchestrator | placement : include_tasks ----------------------------------------------- 0.53s 2025-09-03 01:04:15.460094 | orchestrator | 2025-09-03 01:04:15 | INFO  | Task 714ba85e-d92c-41f9-9cbd-998cb17d014e is in state STARTED 2025-09-03 01:04:15.460105 | orchestrator | 2025-09-03 01:04:15 | INFO  | Task 6badcc18-5225-4448-a1f9-07f98f140883 is in state STARTED 2025-09-03 01:04:15.460116 | orchestrator | 2025-09-03 01:04:15 | INFO  | Task 1d321bf0-7329-4f26-b96d-940e598cfe39 is in state STARTED 2025-09-03 01:04:15.460128 | orchestrator | 2025-09-03 01:04:15 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:04:18.502797 | orchestrator | 2025-09-03 01:04:18 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:04:18.503898 | orchestrator | 2025-09-03 01:04:18 | INFO  | 
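[Annotation] The interleaved "Task <uuid> is in state STARTED" / "Wait 1 second(s) until the next check" messages are emitted while the deployment waits for the OSISM background tasks to finish. A rough illustration of such a poll-and-wait loop follows; this is a sketch only, not the actual osism client code, and get_state is a hypothetical callable standing in for whatever reports task state.

    import time

    def wait_for_tasks(get_state, task_ids, poll_interval=1):
        """Poll task states until no task is left in the STARTED state.

        get_state is assumed to map a task id to a state string such as
        'STARTED' or 'SUCCESS' (hypothetical interface for this sketch).
        """
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state != "STARTED":
                    pending.discard(task_id)
            if pending:
                print(f"Wait {poll_interval} second(s) until the next check")
                time.sleep(poll_interval)

[End annotation]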
Task 714ba85e-d92c-41f9-9cbd-998cb17d014e is in state STARTED 2025-09-03 01:04:18.505662 | orchestrator | 2025-09-03 01:04:18 | INFO  | Task 6badcc18-5225-4448-a1f9-07f98f140883 is in state STARTED 2025-09-03 01:04:18.507112 | orchestrator | 2025-09-03 01:04:18 | INFO  | Task 1d321bf0-7329-4f26-b96d-940e598cfe39 is in state STARTED 2025-09-03 01:04:18.507484 | orchestrator | 2025-09-03 01:04:18 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:04:21.539240 | orchestrator | 2025-09-03 01:04:21 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:04:21.539352 | orchestrator | 2025-09-03 01:04:21 | INFO  | Task 714ba85e-d92c-41f9-9cbd-998cb17d014e is in state SUCCESS 2025-09-03 01:04:21.541269 | orchestrator | 2025-09-03 01:04:21 | INFO  | Task 6badcc18-5225-4448-a1f9-07f98f140883 is in state STARTED 2025-09-03 01:04:21.543044 | orchestrator | 2025-09-03 01:04:21 | INFO  | Task 386c373b-52c5-46ef-99fe-921370775d27 is in state STARTED 2025-09-03 01:04:21.545036 | orchestrator | 2025-09-03 01:04:21 | INFO  | Task 1d321bf0-7329-4f26-b96d-940e598cfe39 is in state STARTED 2025-09-03 01:04:21.545060 | orchestrator | 2025-09-03 01:04:21 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:04:24.573739 | orchestrator | 2025-09-03 01:04:24 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:04:24.574534 | orchestrator | 2025-09-03 01:04:24 | INFO  | Task 6badcc18-5225-4448-a1f9-07f98f140883 is in state STARTED 2025-09-03 01:04:24.574819 | orchestrator | 2025-09-03 01:04:24 | INFO  | Task 386c373b-52c5-46ef-99fe-921370775d27 is in state STARTED 2025-09-03 01:04:24.575419 | orchestrator | 2025-09-03 01:04:24 | INFO  | Task 1d321bf0-7329-4f26-b96d-940e598cfe39 is in state STARTED 2025-09-03 01:04:24.575449 | orchestrator | 2025-09-03 01:04:24 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:04:27.617241 | orchestrator | 2025-09-03 01:04:27 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:04:27.619218 | orchestrator | 2025-09-03 01:04:27 | INFO  | Task 6badcc18-5225-4448-a1f9-07f98f140883 is in state STARTED 2025-09-03 01:04:27.621064 | orchestrator | 2025-09-03 01:04:27 | INFO  | Task 386c373b-52c5-46ef-99fe-921370775d27 is in state STARTED 2025-09-03 01:04:27.623498 | orchestrator | 2025-09-03 01:04:27 | INFO  | Task 1d321bf0-7329-4f26-b96d-940e598cfe39 is in state STARTED 2025-09-03 01:04:27.623524 | orchestrator | 2025-09-03 01:04:27 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:04:30.665360 | orchestrator | 2025-09-03 01:04:30 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:04:30.667599 | orchestrator | 2025-09-03 01:04:30 | INFO  | Task 6badcc18-5225-4448-a1f9-07f98f140883 is in state STARTED 2025-09-03 01:04:30.668973 | orchestrator | 2025-09-03 01:04:30 | INFO  | Task 386c373b-52c5-46ef-99fe-921370775d27 is in state STARTED 2025-09-03 01:04:30.670627 | orchestrator | 2025-09-03 01:04:30 | INFO  | Task 1d321bf0-7329-4f26-b96d-940e598cfe39 is in state STARTED 2025-09-03 01:04:30.670976 | orchestrator | 2025-09-03 01:04:30 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:04:33.712338 | orchestrator | 2025-09-03 01:04:33 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:04:33.713515 | orchestrator | 2025-09-03 01:04:33 | INFO  | Task 6badcc18-5225-4448-a1f9-07f98f140883 is in state STARTED 2025-09-03 01:04:33.715334 | orchestrator | 2025-09-03 01:04:33 | INFO  | 
Task 386c373b-52c5-46ef-99fe-921370775d27 is in state STARTED 2025-09-03 01:04:33.717075 | orchestrator | 2025-09-03 01:04:33 | INFO  | Task 1d321bf0-7329-4f26-b96d-940e598cfe39 is in state STARTED 2025-09-03 01:04:33.717098 | orchestrator | 2025-09-03 01:04:33 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:04:36.763073 | orchestrator | 2025-09-03 01:04:36 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:04:36.764484 | orchestrator | 2025-09-03 01:04:36 | INFO  | Task 6badcc18-5225-4448-a1f9-07f98f140883 is in state STARTED 2025-09-03 01:04:36.766202 | orchestrator | 2025-09-03 01:04:36 | INFO  | Task 386c373b-52c5-46ef-99fe-921370775d27 is in state STARTED 2025-09-03 01:04:36.768789 | orchestrator | 2025-09-03 01:04:36 | INFO  | Task 1d321bf0-7329-4f26-b96d-940e598cfe39 is in state STARTED 2025-09-03 01:04:36.769028 | orchestrator | 2025-09-03 01:04:36 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:04:39.813809 | orchestrator | 2025-09-03 01:04:39 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:04:39.817161 | orchestrator | 2025-09-03 01:04:39 | INFO  | Task 6badcc18-5225-4448-a1f9-07f98f140883 is in state STARTED 2025-09-03 01:04:39.819159 | orchestrator | 2025-09-03 01:04:39 | INFO  | Task 386c373b-52c5-46ef-99fe-921370775d27 is in state STARTED 2025-09-03 01:04:39.821276 | orchestrator | 2025-09-03 01:04:39 | INFO  | Task 1d321bf0-7329-4f26-b96d-940e598cfe39 is in state STARTED 2025-09-03 01:04:39.821301 | orchestrator | 2025-09-03 01:04:39 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:04:42.865798 | orchestrator | 2025-09-03 01:04:42 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:04:42.866276 | orchestrator | 2025-09-03 01:04:42 | INFO  | Task 6badcc18-5225-4448-a1f9-07f98f140883 is in state STARTED 2025-09-03 01:04:42.866966 | orchestrator | 2025-09-03 01:04:42 | INFO  | Task 386c373b-52c5-46ef-99fe-921370775d27 is in state STARTED 2025-09-03 01:04:42.867957 | orchestrator | 2025-09-03 01:04:42 | INFO  | Task 1d321bf0-7329-4f26-b96d-940e598cfe39 is in state STARTED 2025-09-03 01:04:42.868061 | orchestrator | 2025-09-03 01:04:42 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:04:45.912352 | orchestrator | 2025-09-03 01:04:45 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:04:45.912892 | orchestrator | 2025-09-03 01:04:45 | INFO  | Task 6badcc18-5225-4448-a1f9-07f98f140883 is in state STARTED 2025-09-03 01:04:45.913674 | orchestrator | 2025-09-03 01:04:45 | INFO  | Task 386c373b-52c5-46ef-99fe-921370775d27 is in state STARTED 2025-09-03 01:04:45.914721 | orchestrator | 2025-09-03 01:04:45 | INFO  | Task 1d321bf0-7329-4f26-b96d-940e598cfe39 is in state STARTED 2025-09-03 01:04:45.914744 | orchestrator | 2025-09-03 01:04:45 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:04:48.955904 | orchestrator | 2025-09-03 01:04:48 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:04:48.956307 | orchestrator | 2025-09-03 01:04:48 | INFO  | Task 6badcc18-5225-4448-a1f9-07f98f140883 is in state STARTED 2025-09-03 01:04:48.957212 | orchestrator | 2025-09-03 01:04:48 | INFO  | Task 386c373b-52c5-46ef-99fe-921370775d27 is in state STARTED 2025-09-03 01:04:48.958384 | orchestrator | 2025-09-03 01:04:48 | INFO  | Task 1d321bf0-7329-4f26-b96d-940e598cfe39 is in state STARTED 2025-09-03 01:04:48.958409 | orchestrator | 2025-09-03 01:04:48 | INFO  | 
Wait 1 second(s) until the next check 2025-09-03 01:04:52.002332 | orchestrator | 2025-09-03 01:04:52 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:04:52.004636 | orchestrator | 2025-09-03 01:04:52 | INFO  | Task 6badcc18-5225-4448-a1f9-07f98f140883 is in state STARTED 2025-09-03 01:04:52.006887 | orchestrator | 2025-09-03 01:04:52 | INFO  | Task 386c373b-52c5-46ef-99fe-921370775d27 is in state STARTED 2025-09-03 01:04:52.008793 | orchestrator | 2025-09-03 01:04:52 | INFO  | Task 1d321bf0-7329-4f26-b96d-940e598cfe39 is in state STARTED 2025-09-03 01:04:52.009006 | orchestrator | 2025-09-03 01:04:52 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:04:55.048289 | orchestrator | 2025-09-03 01:04:55 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:04:55.049522 | orchestrator | 2025-09-03 01:04:55 | INFO  | Task 6badcc18-5225-4448-a1f9-07f98f140883 is in state STARTED 2025-09-03 01:04:55.051646 | orchestrator | 2025-09-03 01:04:55 | INFO  | Task 386c373b-52c5-46ef-99fe-921370775d27 is in state STARTED 2025-09-03 01:04:55.054320 | orchestrator | 2025-09-03 01:04:55 | INFO  | Task 1d321bf0-7329-4f26-b96d-940e598cfe39 is in state STARTED 2025-09-03 01:04:55.054679 | orchestrator | 2025-09-03 01:04:55 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:04:58.098135 | orchestrator | 2025-09-03 01:04:58 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:04:58.098377 | orchestrator | 2025-09-03 01:04:58 | INFO  | Task 6badcc18-5225-4448-a1f9-07f98f140883 is in state STARTED 2025-09-03 01:04:58.099384 | orchestrator | 2025-09-03 01:04:58 | INFO  | Task 386c373b-52c5-46ef-99fe-921370775d27 is in state STARTED 2025-09-03 01:04:58.100250 | orchestrator | 2025-09-03 01:04:58 | INFO  | Task 1d321bf0-7329-4f26-b96d-940e598cfe39 is in state STARTED 2025-09-03 01:04:58.100275 | orchestrator | 2025-09-03 01:04:58 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:05:01.148910 | orchestrator | 2025-09-03 01:05:01 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:05:01.151850 | orchestrator | 2025-09-03 01:05:01 | INFO  | Task 6badcc18-5225-4448-a1f9-07f98f140883 is in state STARTED 2025-09-03 01:05:01.154181 | orchestrator | 2025-09-03 01:05:01 | INFO  | Task 386c373b-52c5-46ef-99fe-921370775d27 is in state STARTED 2025-09-03 01:05:01.156856 | orchestrator | 2025-09-03 01:05:01 | INFO  | Task 1d321bf0-7329-4f26-b96d-940e598cfe39 is in state STARTED 2025-09-03 01:05:01.157071 | orchestrator | 2025-09-03 01:05:01 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:05:04.202345 | orchestrator | 2025-09-03 01:05:04 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:05:04.204122 | orchestrator | 2025-09-03 01:05:04 | INFO  | Task 6badcc18-5225-4448-a1f9-07f98f140883 is in state STARTED 2025-09-03 01:05:04.205967 | orchestrator | 2025-09-03 01:05:04 | INFO  | Task 386c373b-52c5-46ef-99fe-921370775d27 is in state STARTED 2025-09-03 01:05:04.208386 | orchestrator | 2025-09-03 01:05:04 | INFO  | Task 1d321bf0-7329-4f26-b96d-940e598cfe39 is in state STARTED 2025-09-03 01:05:04.208502 | orchestrator | 2025-09-03 01:05:04 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:05:07.253581 | orchestrator | 2025-09-03 01:05:07 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:05:07.254866 | orchestrator | 2025-09-03 01:05:07 | INFO  | Task 
6badcc18-5225-4448-a1f9-07f98f140883 is in state STARTED 2025-09-03 01:05:07.256526 | orchestrator | 2025-09-03 01:05:07 | INFO  | Task 386c373b-52c5-46ef-99fe-921370775d27 is in state STARTED 2025-09-03 01:05:07.258299 | orchestrator | 2025-09-03 01:05:07 | INFO  | Task 1d321bf0-7329-4f26-b96d-940e598cfe39 is in state STARTED 2025-09-03 01:05:07.258323 | orchestrator | 2025-09-03 01:05:07 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:05:10.298079 | orchestrator | 2025-09-03 01:05:10 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:05:10.298213 | orchestrator | 2025-09-03 01:05:10 | INFO  | Task 6badcc18-5225-4448-a1f9-07f98f140883 is in state STARTED 2025-09-03 01:05:10.298261 | orchestrator | 2025-09-03 01:05:10 | INFO  | Task 386c373b-52c5-46ef-99fe-921370775d27 is in state STARTED 2025-09-03 01:05:10.299146 | orchestrator | 2025-09-03 01:05:10 | INFO  | Task 1d321bf0-7329-4f26-b96d-940e598cfe39 is in state STARTED 2025-09-03 01:05:10.299192 | orchestrator | 2025-09-03 01:05:10 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:05:13.333361 | orchestrator | 2025-09-03 01:05:13 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:05:13.333488 | orchestrator | 2025-09-03 01:05:13 | INFO  | Task 6badcc18-5225-4448-a1f9-07f98f140883 is in state STARTED 2025-09-03 01:05:13.334853 | orchestrator | 2025-09-03 01:05:13 | INFO  | Task 386c373b-52c5-46ef-99fe-921370775d27 is in state STARTED 2025-09-03 01:05:13.335585 | orchestrator | 2025-09-03 01:05:13 | INFO  | Task 1d321bf0-7329-4f26-b96d-940e598cfe39 is in state STARTED 2025-09-03 01:05:13.335608 | orchestrator | 2025-09-03 01:05:13 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:05:16.359850 | orchestrator | 2025-09-03 01:05:16 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:05:16.360580 | orchestrator | 2025-09-03 01:05:16 | INFO  | Task 6badcc18-5225-4448-a1f9-07f98f140883 is in state SUCCESS 2025-09-03 01:05:16.362512 | orchestrator | 2025-09-03 01:05:16.362545 | orchestrator | 2025-09-03 01:05:16.362558 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-03 01:05:16.362570 | orchestrator | 2025-09-03 01:05:16.362582 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-03 01:05:16.362594 | orchestrator | Wednesday 03 September 2025 01:04:17 +0000 (0:00:00.183) 0:00:00.183 *** 2025-09-03 01:05:16.362638 | orchestrator | ok: [testbed-node-0] 2025-09-03 01:05:16.362654 | orchestrator | ok: [testbed-node-1] 2025-09-03 01:05:16.362666 | orchestrator | ok: [testbed-node-2] 2025-09-03 01:05:16.362677 | orchestrator | 2025-09-03 01:05:16.362688 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-03 01:05:16.362700 | orchestrator | Wednesday 03 September 2025 01:04:18 +0000 (0:00:00.350) 0:00:00.534 *** 2025-09-03 01:05:16.362711 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2025-09-03 01:05:16.362723 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2025-09-03 01:05:16.362734 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2025-09-03 01:05:16.362745 | orchestrator | 2025-09-03 01:05:16.362756 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2025-09-03 01:05:16.362766 | orchestrator | 2025-09-03 01:05:16.362778 | orchestrator | TASK [Waiting 
for Nova public port to be UP] *********************************** 2025-09-03 01:05:16.362788 | orchestrator | Wednesday 03 September 2025 01:04:19 +0000 (0:00:00.755) 0:00:01.289 *** 2025-09-03 01:05:16.362799 | orchestrator | ok: [testbed-node-1] 2025-09-03 01:05:16.362811 | orchestrator | ok: [testbed-node-0] 2025-09-03 01:05:16.362822 | orchestrator | ok: [testbed-node-2] 2025-09-03 01:05:16.362832 | orchestrator | 2025-09-03 01:05:16.362843 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-03 01:05:16.362855 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-03 01:05:16.362869 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-03 01:05:16.363212 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-03 01:05:16.363227 | orchestrator | 2025-09-03 01:05:16.363238 | orchestrator | 2025-09-03 01:05:16.363249 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-03 01:05:16.363260 | orchestrator | Wednesday 03 September 2025 01:04:20 +0000 (0:00:00.980) 0:00:02.270 *** 2025-09-03 01:05:16.363271 | orchestrator | =============================================================================== 2025-09-03 01:05:16.363282 | orchestrator | Waiting for Nova public port to be UP ----------------------------------- 0.98s 2025-09-03 01:05:16.363294 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.76s 2025-09-03 01:05:16.363304 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.35s 2025-09-03 01:05:16.363315 | orchestrator | 2025-09-03 01:05:16.363326 | orchestrator | 2025-09-03 01:05:16.363337 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-03 01:05:16.363348 | orchestrator | 2025-09-03 01:05:16.363359 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-03 01:05:16.363370 | orchestrator | Wednesday 03 September 2025 01:03:26 +0000 (0:00:00.189) 0:00:00.189 *** 2025-09-03 01:05:16.363381 | orchestrator | ok: [testbed-node-0] 2025-09-03 01:05:16.363392 | orchestrator | ok: [testbed-node-1] 2025-09-03 01:05:16.363403 | orchestrator | ok: [testbed-node-2] 2025-09-03 01:05:16.363414 | orchestrator | 2025-09-03 01:05:16.363425 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-03 01:05:16.363435 | orchestrator | Wednesday 03 September 2025 01:03:27 +0000 (0:00:00.212) 0:00:00.401 *** 2025-09-03 01:05:16.363446 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2025-09-03 01:05:16.363457 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2025-09-03 01:05:16.363468 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2025-09-03 01:05:16.363479 | orchestrator | 2025-09-03 01:05:16.363490 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2025-09-03 01:05:16.363500 | orchestrator | 2025-09-03 01:05:16.363512 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-09-03 01:05:16.363534 | orchestrator | Wednesday 03 September 2025 01:03:27 +0000 (0:00:00.288) 0:00:00.690 *** 2025-09-03 01:05:16.363572 | orchestrator | included: 
/ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 01:05:16.363585 | orchestrator | 2025-09-03 01:05:16.363596 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2025-09-03 01:05:16.363607 | orchestrator | Wednesday 03 September 2025 01:03:27 +0000 (0:00:00.381) 0:00:01.072 *** 2025-09-03 01:05:16.363618 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2025-09-03 01:05:16.363629 | orchestrator | 2025-09-03 01:05:16.363640 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2025-09-03 01:05:16.363651 | orchestrator | Wednesday 03 September 2025 01:03:31 +0000 (0:00:03.450) 0:00:04.522 *** 2025-09-03 01:05:16.363662 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2025-09-03 01:05:16.363673 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2025-09-03 01:05:16.363684 | orchestrator | 2025-09-03 01:05:16.363695 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2025-09-03 01:05:16.363706 | orchestrator | Wednesday 03 September 2025 01:03:37 +0000 (0:00:06.365) 0:00:10.888 *** 2025-09-03 01:05:16.363717 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-03 01:05:16.363728 | orchestrator | 2025-09-03 01:05:16.363739 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2025-09-03 01:05:16.363750 | orchestrator | Wednesday 03 September 2025 01:03:40 +0000 (0:00:03.284) 0:00:14.172 *** 2025-09-03 01:05:16.363771 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-03 01:05:16.363783 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2025-09-03 01:05:16.363794 | orchestrator | 2025-09-03 01:05:16.363805 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2025-09-03 01:05:16.363816 | orchestrator | Wednesday 03 September 2025 01:03:44 +0000 (0:00:03.739) 0:00:17.911 *** 2025-09-03 01:05:16.363827 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-03 01:05:16.363838 | orchestrator | 2025-09-03 01:05:16.363849 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2025-09-03 01:05:16.363860 | orchestrator | Wednesday 03 September 2025 01:03:47 +0000 (0:00:03.286) 0:00:21.198 *** 2025-09-03 01:05:16.363870 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2025-09-03 01:05:16.363881 | orchestrator | 2025-09-03 01:05:16.363892 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2025-09-03 01:05:16.363903 | orchestrator | Wednesday 03 September 2025 01:03:52 +0000 (0:00:04.379) 0:00:25.578 *** 2025-09-03 01:05:16.363940 | orchestrator | changed: [testbed-node-0] 2025-09-03 01:05:16.363952 | orchestrator | 2025-09-03 01:05:16.363963 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2025-09-03 01:05:16.363974 | orchestrator | Wednesday 03 September 2025 01:03:55 +0000 (0:00:03.315) 0:00:28.894 *** 2025-09-03 01:05:16.363985 | orchestrator | changed: [testbed-node-0] 2025-09-03 01:05:16.363996 | orchestrator | 2025-09-03 01:05:16.364007 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2025-09-03 
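[Annotation] The "magnum | Creating endpoints" entries above register one internal and one public endpoint per service, built from the internal/external FQDNs and the service port. A small sketch of that URL construction; the endpoint_urls() helper is hypothetical, but the FQDNs, port, and path are taken from the log.

    # FQDNs, port, and path as logged for magnum above.
    INTERNAL_FQDN = "api-int.testbed.osism.xyz"
    EXTERNAL_FQDN = "api.testbed.osism.xyz"

    def endpoint_urls(port: int, path: str = "") -> dict[str, str]:
        """Build the internal/public endpoint URLs for one service."""
        return {
            "internal": f"https://{INTERNAL_FQDN}:{port}{path}",
            "public": f"https://{EXTERNAL_FQDN}:{port}{path}",
        }

    for interface, url in endpoint_urls(9511, "/v1").items():
        print(f"magnum -> {url} -> {interface}")
    # magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal
    # magnum -> https://api.testbed.osism.xyz:9511/v1 -> public

[End annotation]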
01:05:16.364018 | orchestrator | Wednesday 03 September 2025 01:03:59 +0000 (0:00:03.872) 0:00:32.766 *** 2025-09-03 01:05:16.364029 | orchestrator | changed: [testbed-node-0] 2025-09-03 01:05:16.364040 | orchestrator | 2025-09-03 01:05:16.364050 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2025-09-03 01:05:16.364061 | orchestrator | Wednesday 03 September 2025 01:04:03 +0000 (0:00:03.757) 0:00:36.524 *** 2025-09-03 01:05:16.364077 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-03 01:05:16.364106 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-03 01:05:16.364119 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-03 01:05:16.364141 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': 
{'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-03 01:05:16.364155 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-03 01:05:16.364166 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-03 01:05:16.364185 | orchestrator | 2025-09-03 01:05:16.364197 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2025-09-03 01:05:16.364208 | orchestrator | Wednesday 03 September 2025 01:04:04 +0000 (0:00:01.299) 0:00:37.823 *** 2025-09-03 01:05:16.364219 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:05:16.364230 | orchestrator | 2025-09-03 01:05:16.364241 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2025-09-03 01:05:16.364251 | orchestrator | Wednesday 03 September 2025 01:04:04 +0000 (0:00:00.139) 0:00:37.963 *** 2025-09-03 01:05:16.364262 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:05:16.364273 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:05:16.364284 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:05:16.364295 | orchestrator | 2025-09-03 01:05:16.364306 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2025-09-03 01:05:16.364317 | orchestrator | Wednesday 03 September 2025 01:04:05 +0000 (0:00:00.536) 0:00:38.500 *** 2025-09-03 01:05:16.364328 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-03 01:05:16.364339 | orchestrator | 2025-09-03 01:05:16.364350 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2025-09-03 01:05:16.364360 | orchestrator | Wednesday 03 September 2025 01:04:06 +0000 (0:00:00.792) 0:00:39.293 *** 2025-09-03 01:05:16.364377 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-03 01:05:16.364399 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-03 01:05:16.364411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-03 01:05:16.364429 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-03 
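The item dictionaries repeated throughout these loops are kolla-ansible's per-service definitions for the magnum role (container name, image, bind mounts, healthcheck, HAProxy frontends). A rough Python sketch of the structure the tasks iterate over, with values copied from the testbed-node-0 entries above and trimmed to the essentials; the authoritative definitions live in the kolla-ansible magnum role, not here:

# Per-service definition map, mirroring the item.key / item.value pairs printed above.
magnum_services = {
    "magnum-api": {
        "container_name": "magnum_api",
        "group": "magnum-api",
        "enabled": True,
        "image": "registry.osism.tech/kolla/magnum-api:2024.2",
        "volumes": [
            "/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "kolla_logs:/var/log/kolla/",
        ],
        "healthcheck": {
            "interval": "30", "retries": "3", "start_period": "5", "timeout": "30",
            "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9511"],
        },
    },
    "magnum-conductor": {
        "container_name": "magnum_conductor",
        "group": "magnum-conductor",
        "enabled": True,
        "image": "registry.osism.tech/kolla/magnum-conductor:2024.2",
        "volumes": [
            "/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro",
            "magnum:/var/lib/magnum/",
            "kolla_logs:/var/log/kolla/",
        ],
        "healthcheck": {
            "interval": "30", "retries": "3", "start_period": "5", "timeout": "30",
            "test": ["CMD-SHELL", "healthcheck_port magnum-conductor 5672"],
        },
    },
}

# Each config task ("Ensuring config directories exist", "Copying over kubeconfig
# file", ...) loops over the enabled services in exactly this fashion:
for name, service in magnum_services.items():
    if service["enabled"]:
        print(f"/etc/kolla/{name}/ -> {service['container_name']} ({service['image']})")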
01:05:16.364441 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-03 01:05:16.364458 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-03 01:05:16.364470 | orchestrator | 2025-09-03 01:05:16.364481 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-09-03 01:05:16.364492 | orchestrator | Wednesday 03 September 2025 01:04:08 +0000 (0:00:02.454) 0:00:41.747 *** 2025-09-03 01:05:16.364503 | orchestrator | ok: [testbed-node-0] 2025-09-03 01:05:16.364514 | orchestrator | ok: [testbed-node-1] 2025-09-03 01:05:16.364525 | orchestrator | ok: [testbed-node-2] 2025-09-03 01:05:16.364536 | orchestrator | 2025-09-03 01:05:16.364547 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-09-03 01:05:16.364564 | orchestrator | Wednesday 03 September 2025 01:04:08 +0000 (0:00:00.313) 0:00:42.061 *** 2025-09-03 01:05:16.364576 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 01:05:16.364588 | orchestrator | 2025-09-03 01:05:16.364599 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-09-03 01:05:16.364610 | orchestrator | Wednesday 03 September 2025 01:04:09 +0000 (0:00:00.684) 0:00:42.746 *** 2025-09-03 01:05:16.364621 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-03 01:05:16.364640 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-03 01:05:16.364675 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-03 01:05:16.364687 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-03 01:05:16.364708 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-03 01:05:16.364727 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-03 01:05:16.364738 | orchestrator | 2025-09-03 01:05:16.364749 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-09-03 01:05:16.364760 | orchestrator | Wednesday 03 September 2025 01:04:11 +0000 (0:00:02.445) 0:00:45.191 *** 2025-09-03 01:05:16.364772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-03 01:05:16.364783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-03 01:05:16.364795 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:05:16.364812 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 
'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-03 01:05:16.364833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-03 01:05:16.364852 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:05:16.364864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-03 01:05:16.364876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-03 01:05:16.364887 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:05:16.364898 | orchestrator | 2025-09-03 01:05:16.364909 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-09-03 01:05:16.364955 | orchestrator | Wednesday 03 September 2025 01:04:12 +0000 (0:00:00.625) 0:00:45.817 *** 2025-09-03 01:05:16.364972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-03 01:05:16.364985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-03 01:05:16.365004 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:05:16.365022 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-03 01:05:16.365034 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-03 01:05:16.365046 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:05:16.365057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-03 01:05:16.365074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-03 01:05:16.365085 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:05:16.365096 | orchestrator | 2025-09-03 01:05:16.365108 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************* 2025-09-03 01:05:16.365119 | orchestrator | Wednesday 03 September 2025 01:04:13 +0000 (0:00:00.974) 0:00:46.791 *** 2025-09-03 01:05:16.365137 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-03 01:05:16.365161 | orchestrator | 2025-09-03 01:05:16 | INFO  | Task 386c373b-52c5-46ef-99fe-921370775d27 is in state STARTED 2025-09-03 01:05:16.365175 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511',
'listen_port': '9511'}}}}) 2025-09-03 01:05:16.365187 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-03 01:05:16.365199 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-03 01:05:16.365215 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-03 01:05:16.365241 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-03 01:05:16.365253 | orchestrator | 2025-09-03 01:05:16.365264 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-09-03 01:05:16.365275 | orchestrator | Wednesday 03 September 2025 01:04:16 +0000 (0:00:02.578) 0:00:49.369 *** 2025-09-03 
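The healthcheck block carried in each item above becomes the container's Docker health check: every 30 seconds it runs healthcheck_curl against the service's bind address, retries 3 times and times out after 30 seconds. healthcheck_curl is a small shell helper shipped inside the kolla images; the Python sketch below only approximates its intent (an HTTP probe of http://192.168.16.10:9511, the testbed-node-0 magnum-api address above), not its exact behaviour:

import time
import urllib.error
import urllib.request

def probe(url: str, retries: int = 3, interval: float = 30.0, timeout: float = 30.0) -> bool:
    """Return True once the endpoint answers, retrying like the healthcheck above."""
    for attempt in range(1, retries + 1):
        try:
            with urllib.request.urlopen(url, timeout=timeout):
                return True
        except urllib.error.HTTPError:
            return True  # an HTTP error response still proves something is listening (simplification)
        except OSError:
            if attempt < retries:
                time.sleep(interval)
    return False

if __name__ == "__main__":
    # interval shortened here so the demo fails fast when the API is unreachable
    print("healthy" if probe("http://192.168.16.10:9511", interval=1.0) else "unhealthy")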
01:05:16.365287 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-03 01:05:16.365299 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-03 01:05:16.365315 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-03 01:05:16.365334 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': 
'30'}}}) 2025-09-03 01:05:16.365352 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-03 01:05:16.365363 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-03 01:05:16.365375 | orchestrator | 2025-09-03 01:05:16.365386 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-09-03 01:05:16.365397 | orchestrator | Wednesday 03 September 2025 01:04:21 +0000 (0:00:05.661) 0:00:55.031 *** 2025-09-03 01:05:16.365408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-03 01:05:16.365424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-03 01:05:16.365443 
| orchestrator | skipping: [testbed-node-0] 2025-09-03 01:05:16.365454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-03 01:05:16.365473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-03 01:05:16.365484 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:05:16.365496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-03 01:05:16.365507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-03 01:05:16.365519 | 
orchestrator | skipping: [testbed-node-2] 2025-09-03 01:05:16.365530 | orchestrator | 2025-09-03 01:05:16.365541 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-09-03 01:05:16.365552 | orchestrator | Wednesday 03 September 2025 01:04:22 +0000 (0:00:00.626) 0:00:55.657 *** 2025-09-03 01:05:16.365568 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-03 01:05:16.365594 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-03 01:05:16.365606 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-03 01:05:16.365618 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-03 01:05:16.365629 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-03 01:05:16.365653 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-03 01:05:16.365665 | orchestrator | 2025-09-03 01:05:16.365676 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-09-03 01:05:16.365687 | orchestrator | Wednesday 03 September 2025 01:04:24 +0000 (0:00:02.393) 0:00:58.051 *** 2025-09-03 01:05:16.365698 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:05:16.365709 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:05:16.365720 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:05:16.365731 | orchestrator | 2025-09-03 01:05:16.365742 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2025-09-03 01:05:16.365753 | orchestrator | Wednesday 03 September 2025 01:04:25 +0000 (0:00:00.396) 0:00:58.447 *** 2025-09-03 01:05:16.365764 | orchestrator | changed: [testbed-node-0] 2025-09-03 01:05:16.365774 | orchestrator | 2025-09-03 01:05:16.365785 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2025-09-03 01:05:16.365796 | orchestrator | Wednesday 03 September 2025 01:04:27 +0000 (0:00:02.253) 0:01:00.701 *** 2025-09-03 01:05:16.365807 | orchestrator | changed: [testbed-node-0] 2025-09-03 01:05:16.365818 | orchestrator | 2025-09-03 01:05:16.365836 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2025-09-03 01:05:16.365848 | orchestrator | Wednesday 03 September 2025 01:04:29 +0000 (0:00:02.286) 0:01:02.987 *** 2025-09-03 01:05:16.365858 | orchestrator | changed: [testbed-node-0] 2025-09-03 01:05:16.365869 | orchestrator | 2025-09-03 01:05:16.365880 | 
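The three tasks just above ("Creating Magnum database", "Creating Magnum database user and setting permissions", "Running Magnum bootstrap container") prepare the backing MariaDB and apply the schema before the containers are restarted. A loose sketch of what they amount to: the address 192.168.16.9 is only assumed to be the internal database VIP (it appears in the no_proxy lists above), the credentials are placeholders, and the docker invocation stands in for kolla-ansible's one-shot bootstrap container, which typically applies migrations via magnum-db-manage upgrade:

import subprocess

# Placeholder SQL roughly equivalent to the database and user creation tasks.
DB_SQL = (
    "CREATE DATABASE IF NOT EXISTS magnum;"
    "CREATE USER IF NOT EXISTS 'magnum'@'%' IDENTIFIED BY 'REPLACE_ME';"
    "GRANT ALL PRIVILEGES ON magnum.* TO 'magnum'@'%';"
)

def bootstrap_magnum_db() -> None:
    # 1) create database and user (placeholder credentials, assumed VIP address)
    subprocess.run(
        ["mysql", "-h", "192.168.16.9", "-u", "root", "-pREPLACE_ME", "-e", DB_SQL],
        check=True,
    )
    # 2) one-shot bootstrap container that runs the schema migrations
    subprocess.run(
        ["docker", "run", "--rm",
         "-v", "/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro",
         "registry.osism.tech/kolla/magnum-api:2024.2",
         "magnum-db-manage", "upgrade"],
        check=True,
    )

if __name__ == "__main__":
    bootstrap_magnum_db()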
orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-09-03 01:05:16.365891 | orchestrator | Wednesday 03 September 2025 01:04:45 +0000 (0:00:15.861) 0:01:18.849 *** 2025-09-03 01:05:16.365902 | orchestrator | 2025-09-03 01:05:16.366176 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-09-03 01:05:16.366196 | orchestrator | Wednesday 03 September 2025 01:04:45 +0000 (0:00:00.070) 0:01:18.919 *** 2025-09-03 01:05:16.366207 | orchestrator | 2025-09-03 01:05:16.366218 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-09-03 01:05:16.366229 | orchestrator | Wednesday 03 September 2025 01:04:45 +0000 (0:00:00.103) 0:01:19.023 *** 2025-09-03 01:05:16.366240 | orchestrator | 2025-09-03 01:05:16.366250 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2025-09-03 01:05:16.366261 | orchestrator | Wednesday 03 September 2025 01:04:45 +0000 (0:00:00.167) 0:01:19.190 *** 2025-09-03 01:05:16.366272 | orchestrator | changed: [testbed-node-0] 2025-09-03 01:05:16.366283 | orchestrator | changed: [testbed-node-1] 2025-09-03 01:05:16.366294 | orchestrator | changed: [testbed-node-2] 2025-09-03 01:05:16.366305 | orchestrator | 2025-09-03 01:05:16.366317 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2025-09-03 01:05:16.366327 | orchestrator | Wednesday 03 September 2025 01:05:04 +0000 (0:00:18.743) 0:01:37.933 *** 2025-09-03 01:05:16.366338 | orchestrator | changed: [testbed-node-0] 2025-09-03 01:05:16.366350 | orchestrator | changed: [testbed-node-1] 2025-09-03 01:05:16.366361 | orchestrator | changed: [testbed-node-2] 2025-09-03 01:05:16.366371 | orchestrator | 2025-09-03 01:05:16.366382 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-03 01:05:16.366393 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-03 01:05:16.366415 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-03 01:05:16.366427 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-03 01:05:16.366438 | orchestrator | 2025-09-03 01:05:16.366448 | orchestrator | 2025-09-03 01:05:16.366460 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-03 01:05:16.366469 | orchestrator | Wednesday 03 September 2025 01:05:15 +0000 (0:00:11.001) 0:01:48.935 *** 2025-09-03 01:05:16.366479 | orchestrator | =============================================================================== 2025-09-03 01:05:16.366489 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 18.74s 2025-09-03 01:05:16.366499 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 15.86s 2025-09-03 01:05:16.366509 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 11.00s 2025-09-03 01:05:16.366519 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.37s 2025-09-03 01:05:16.366528 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 5.66s 2025-09-03 01:05:16.366538 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.38s 2025-09-03 
01:05:16.366547 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.87s 2025-09-03 01:05:16.366557 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.76s 2025-09-03 01:05:16.366567 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.74s 2025-09-03 01:05:16.366576 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.45s 2025-09-03 01:05:16.366586 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.32s 2025-09-03 01:05:16.366596 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.29s 2025-09-03 01:05:16.366611 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.28s 2025-09-03 01:05:16.366621 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.58s 2025-09-03 01:05:16.366630 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.45s 2025-09-03 01:05:16.366640 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.45s 2025-09-03 01:05:16.366650 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.39s 2025-09-03 01:05:16.366659 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.29s 2025-09-03 01:05:16.366669 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.25s 2025-09-03 01:05:16.366679 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.30s 2025-09-03 01:05:16.366688 | orchestrator | 2025-09-03 01:05:16 | INFO  | Task 1d321bf0-7329-4f26-b96d-940e598cfe39 is in state STARTED 2025-09-03 01:05:16.366699 | orchestrator | 2025-09-03 01:05:16 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:05:19.384490 | orchestrator | 2025-09-03 01:05:19 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:05:19.385405 | orchestrator | 2025-09-03 01:05:19 | INFO  | Task 386c373b-52c5-46ef-99fe-921370775d27 is in state STARTED 2025-09-03 01:05:19.385997 | orchestrator | 2025-09-03 01:05:19 | INFO  | Task 1d321bf0-7329-4f26-b96d-940e598cfe39 is in state STARTED 2025-09-03 01:05:19.386132 | orchestrator | 2025-09-03 01:05:19 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:05:22.413497 | orchestrator | 2025-09-03 01:05:22 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:05:22.415162 | orchestrator | 2025-09-03 01:05:22 | INFO  | Task 386c373b-52c5-46ef-99fe-921370775d27 is in state STARTED 2025-09-03 01:05:22.416845 | orchestrator | 2025-09-03 01:05:22 | INFO  | Task 1d321bf0-7329-4f26-b96d-940e598cfe39 is in state STARTED 2025-09-03 01:05:22.416874 | orchestrator | 2025-09-03 01:05:22 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:05:25.454518 | orchestrator | 2025-09-03 01:05:25 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:05:25.456079 | orchestrator | 2025-09-03 01:05:25 | INFO  | Task 386c373b-52c5-46ef-99fe-921370775d27 is in state STARTED 2025-09-03 01:05:25.457537 | orchestrator | 2025-09-03 01:05:25 | INFO  | Task 1d321bf0-7329-4f26-b96d-940e598cfe39 is in state STARTED 2025-09-03 01:05:25.458203 | orchestrator | 2025-09-03 01:05:25 | INFO  | Wait 1 second(s) until the next check 2025-09-03 
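The interleaved "Task <uuid> is in state STARTED" / "Wait 1 second(s) until the next check" lines come from the OSISM layer that launched these kolla-ansible runs as background tasks and keeps polling them until they finish. A minimal sketch of such a wait loop; get_state() is a stand-in for the real task-state lookup, and FAILURE is assumed here as the terminal error state alongside the SUCCESS seen further down:

import time

def get_state(task_id: str) -> str:
    raise NotImplementedError("stand-in for the real task-state lookup")

def wait_for_tasks(task_ids: list[str], interval: float = 1.0) -> None:
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):  # sorted() copies, so discarding below is safe
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)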
01:05:28.508292 | orchestrator | 2025-09-03 01:05:28 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:05:28.509561 | orchestrator | 2025-09-03 01:05:28 | INFO  | Task 386c373b-52c5-46ef-99fe-921370775d27 is in state STARTED 2025-09-03 01:05:28.511024 | orchestrator | 2025-09-03 01:05:28 | INFO  | Task 1d321bf0-7329-4f26-b96d-940e598cfe39 is in state STARTED 2025-09-03 01:05:28.511054 | orchestrator | 2025-09-03 01:05:28 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:05:31.549441 | orchestrator | 2025-09-03 01:05:31 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:05:31.551733 | orchestrator | 2025-09-03 01:05:31 | INFO  | Task 386c373b-52c5-46ef-99fe-921370775d27 is in state STARTED 2025-09-03 01:05:31.553996 | orchestrator | 2025-09-03 01:05:31 | INFO  | Task 1d321bf0-7329-4f26-b96d-940e598cfe39 is in state STARTED 2025-09-03 01:05:31.554449 | orchestrator | 2025-09-03 01:05:31 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:05:34.598752 | orchestrator | 2025-09-03 01:05:34 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:05:34.599624 | orchestrator | 2025-09-03 01:05:34 | INFO  | Task 386c373b-52c5-46ef-99fe-921370775d27 is in state STARTED 2025-09-03 01:05:34.603466 | orchestrator | 2025-09-03 01:05:34 | INFO  | Task 1d321bf0-7329-4f26-b96d-940e598cfe39 is in state STARTED 2025-09-03 01:05:34.603494 | orchestrator | 2025-09-03 01:05:34 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:05:37.649473 | orchestrator | 2025-09-03 01:05:37 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:05:37.650756 | orchestrator | 2025-09-03 01:05:37 | INFO  | Task 386c373b-52c5-46ef-99fe-921370775d27 is in state STARTED 2025-09-03 01:05:37.652530 | orchestrator | 2025-09-03 01:05:37 | INFO  | Task 1d321bf0-7329-4f26-b96d-940e598cfe39 is in state STARTED 2025-09-03 01:05:37.652560 | orchestrator | 2025-09-03 01:05:37 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:05:40.695175 | orchestrator | 2025-09-03 01:05:40 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:05:40.696311 | orchestrator | 2025-09-03 01:05:40 | INFO  | Task 386c373b-52c5-46ef-99fe-921370775d27 is in state STARTED 2025-09-03 01:05:40.698145 | orchestrator | 2025-09-03 01:05:40 | INFO  | Task 1d321bf0-7329-4f26-b96d-940e598cfe39 is in state STARTED 2025-09-03 01:05:40.698205 | orchestrator | 2025-09-03 01:05:40 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:05:43.751096 | orchestrator | 2025-09-03 01:05:43 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:05:43.751226 | orchestrator | 2025-09-03 01:05:43 | INFO  | Task 386c373b-52c5-46ef-99fe-921370775d27 is in state STARTED 2025-09-03 01:05:43.751274 | orchestrator | 2025-09-03 01:05:43 | INFO  | Task 1d321bf0-7329-4f26-b96d-940e598cfe39 is in state STARTED 2025-09-03 01:05:43.751287 | orchestrator | 2025-09-03 01:05:43 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:05:46.803029 | orchestrator | 2025-09-03 01:05:46 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:05:46.804102 | orchestrator | 2025-09-03 01:05:46 | INFO  | Task 386c373b-52c5-46ef-99fe-921370775d27 is in state STARTED 2025-09-03 01:05:46.805688 | orchestrator | 2025-09-03 01:05:46 | INFO  | Task 1d321bf0-7329-4f26-b96d-940e598cfe39 is in state STARTED 2025-09-03 01:05:46.805720 | 
orchestrator | 2025-09-03 01:05:46 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:05:49.853259 | orchestrator | 2025-09-03 01:05:49 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state STARTED 2025-09-03 01:05:49.854245 | orchestrator | 2025-09-03 01:05:49 | INFO  | Task 386c373b-52c5-46ef-99fe-921370775d27 is in state STARTED 2025-09-03 01:05:49.855450 | orchestrator | 2025-09-03 01:05:49 | INFO  | Task 1d321bf0-7329-4f26-b96d-940e598cfe39 is in state STARTED 2025-09-03 01:05:49.855487 | orchestrator | 2025-09-03 01:05:49 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:05:52.900073 | orchestrator | 2025-09-03 01:05:52 | INFO  | Task b965d647-a8dd-495c-afcb-f6fed92ce4b2 is in state SUCCESS 2025-09-03 01:05:52.901838 | orchestrator | 2025-09-03 01:05:52.901882 | orchestrator | 2025-09-03 01:05:52.901896 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-03 01:05:52.901936 | orchestrator | 2025-09-03 01:05:52.901958 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2025-09-03 01:05:52.901978 | orchestrator | Wednesday 03 September 2025 00:57:29 +0000 (0:00:00.267) 0:00:00.267 *** 2025-09-03 01:05:52.901997 | orchestrator | changed: [testbed-manager] 2025-09-03 01:05:52.902211 | orchestrator | changed: [testbed-node-0] 2025-09-03 01:05:52.902235 | orchestrator | changed: [testbed-node-1] 2025-09-03 01:05:52.902247 | orchestrator | changed: [testbed-node-2] 2025-09-03 01:05:52.902258 | orchestrator | changed: [testbed-node-3] 2025-09-03 01:05:52.902270 | orchestrator | changed: [testbed-node-4] 2025-09-03 01:05:52.902282 | orchestrator | changed: [testbed-node-5] 2025-09-03 01:05:52.902293 | orchestrator | 2025-09-03 01:05:52.902305 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-03 01:05:52.902318 | orchestrator | Wednesday 03 September 2025 00:57:30 +0000 (0:00:00.843) 0:00:01.111 *** 2025-09-03 01:05:52.902330 | orchestrator | changed: [testbed-manager] 2025-09-03 01:05:52.902341 | orchestrator | changed: [testbed-node-0] 2025-09-03 01:05:52.902353 | orchestrator | changed: [testbed-node-1] 2025-09-03 01:05:52.902364 | orchestrator | changed: [testbed-node-2] 2025-09-03 01:05:52.902375 | orchestrator | changed: [testbed-node-3] 2025-09-03 01:05:52.902386 | orchestrator | changed: [testbed-node-4] 2025-09-03 01:05:52.902398 | orchestrator | changed: [testbed-node-5] 2025-09-03 01:05:52.902409 | orchestrator | 2025-09-03 01:05:52.902423 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-03 01:05:52.902437 | orchestrator | Wednesday 03 September 2025 00:57:31 +0000 (0:00:00.617) 0:00:01.728 *** 2025-09-03 01:05:52.902451 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2025-09-03 01:05:52.902465 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2025-09-03 01:05:52.902478 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2025-09-03 01:05:52.902492 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2025-09-03 01:05:52.902505 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2025-09-03 01:05:52.902518 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2025-09-03 01:05:52.902530 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2025-09-03 01:05:52.902573 | orchestrator | 2025-09-03 01:05:52.902586 | 
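The "Group hosts based on ..." tasks at the start of this play build ad-hoc inventory groups from per-host flags; loop items such as enable_nova_True show the group names being created. Roughly the same grouping expressed in Python, with the host list and the enable_nova flag taken from the task output above:

hosts = {
    "testbed-manager": {"enable_nova": True},
    "testbed-node-0": {"enable_nova": True},
    "testbed-node-1": {"enable_nova": True},
    "testbed-node-2": {"enable_nova": True},
    "testbed-node-3": {"enable_nova": True},
    "testbed-node-4": {"enable_nova": True},
    "testbed-node-5": {"enable_nova": True},
}

groups: dict[str, list[str]] = {}
for host, facts in hosts.items():
    for flag, value in facts.items():
        groups.setdefault(f"{flag}_{value}", []).append(host)

print(groups)  # {'enable_nova_True': ['testbed-manager', 'testbed-node-0', ...]}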
orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2025-09-03 01:05:52.902599 | orchestrator | 2025-09-03 01:05:52.902612 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-09-03 01:05:52.902626 | orchestrator | Wednesday 03 September 2025 00:57:31 +0000 (0:00:00.706) 0:00:02.435 *** 2025-09-03 01:05:52.902640 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 01:05:52.902652 | orchestrator | 2025-09-03 01:05:52.902665 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2025-09-03 01:05:52.903178 | orchestrator | Wednesday 03 September 2025 00:57:32 +0000 (0:00:00.830) 0:00:03.266 *** 2025-09-03 01:05:52.903198 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2025-09-03 01:05:52.903210 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2025-09-03 01:05:52.903222 | orchestrator | 2025-09-03 01:05:52.903233 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2025-09-03 01:05:52.903244 | orchestrator | Wednesday 03 September 2025 00:57:36 +0000 (0:00:03.451) 0:00:06.717 *** 2025-09-03 01:05:52.903255 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-03 01:05:52.903267 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-03 01:05:52.903278 | orchestrator | changed: [testbed-node-0] 2025-09-03 01:05:52.903289 | orchestrator | 2025-09-03 01:05:52.903300 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-09-03 01:05:52.903311 | orchestrator | Wednesday 03 September 2025 00:57:39 +0000 (0:00:03.461) 0:00:10.179 *** 2025-09-03 01:05:52.903322 | orchestrator | changed: [testbed-node-0] 2025-09-03 01:05:52.903333 | orchestrator | 2025-09-03 01:05:52.903344 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2025-09-03 01:05:52.903354 | orchestrator | Wednesday 03 September 2025 00:57:40 +0000 (0:00:00.706) 0:00:10.885 *** 2025-09-03 01:05:52.903365 | orchestrator | changed: [testbed-node-0] 2025-09-03 01:05:52.903376 | orchestrator | 2025-09-03 01:05:52.903387 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2025-09-03 01:05:52.903398 | orchestrator | Wednesday 03 September 2025 00:57:41 +0000 (0:00:01.340) 0:00:12.226 *** 2025-09-03 01:05:52.903408 | orchestrator | changed: [testbed-node-0] 2025-09-03 01:05:52.903419 | orchestrator | 2025-09-03 01:05:52.903430 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-09-03 01:05:52.903441 | orchestrator | Wednesday 03 September 2025 00:57:44 +0000 (0:00:03.056) 0:00:15.282 *** 2025-09-03 01:05:52.903452 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:05:52.903463 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:05:52.903474 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:05:52.903485 | orchestrator | 2025-09-03 01:05:52.903496 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-09-03 01:05:52.903507 | orchestrator | Wednesday 03 September 2025 00:57:44 +0000 (0:00:00.283) 0:00:15.566 *** 2025-09-03 01:05:52.903518 | orchestrator | ok: [testbed-node-0] 2025-09-03 01:05:52.903529 | orchestrator | 2025-09-03 01:05:52.904194 | orchestrator | TASK [nova : Create cell0 mappings] 
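
The bootstrap container that just ran essentially performs the Nova API schema migrations, and the cell0-mapping step that follows registers the cell0 database with the API database. A hedged sketch of roughly the same steps, calling nova-manage directly instead of going through the kolla bootstrap container (the database_address and credential variables are assumptions; this is not the kolla-ansible implementation):

- name: Create the Nova API databases (sketch)
  community.mysql.mysql_db:
    login_host: "{{ database_address }}"   # assumed variable
    login_user: root
    login_password: "{{ database_password }}"
    name: "{{ item }}"
    state: present
  loop:
    - nova_api
    - nova_cell0

- name: Run the API schema migrations and map cell0
  ansible.builtin.command: "{{ item }}"
  loop:
    - nova-manage api_db sync
    - nova-manage cell_v2 map_cell0
  changed_when: true
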
******************************************** 2025-09-03 01:05:52.904278 | orchestrator | Wednesday 03 September 2025 00:58:14 +0000 (0:00:29.065) 0:00:44.632 *** 2025-09-03 01:05:52.904299 | orchestrator | changed: [testbed-node-0] 2025-09-03 01:05:52.904320 | orchestrator | 2025-09-03 01:05:52.904338 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-09-03 01:05:52.904357 | orchestrator | Wednesday 03 September 2025 00:58:25 +0000 (0:00:11.784) 0:00:56.417 *** 2025-09-03 01:05:52.904375 | orchestrator | ok: [testbed-node-0] 2025-09-03 01:05:52.904392 | orchestrator | 2025-09-03 01:05:52.904410 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-09-03 01:05:52.904427 | orchestrator | Wednesday 03 September 2025 00:58:35 +0000 (0:00:09.516) 0:01:05.933 *** 2025-09-03 01:05:52.904498 | orchestrator | ok: [testbed-node-0] 2025-09-03 01:05:52.904558 | orchestrator | 2025-09-03 01:05:52.904571 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2025-09-03 01:05:52.904582 | orchestrator | Wednesday 03 September 2025 00:58:36 +0000 (0:00:00.964) 0:01:06.897 *** 2025-09-03 01:05:52.904592 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:05:52.904603 | orchestrator | 2025-09-03 01:05:52.904614 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-09-03 01:05:52.904625 | orchestrator | Wednesday 03 September 2025 00:58:36 +0000 (0:00:00.514) 0:01:07.411 *** 2025-09-03 01:05:52.904635 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 01:05:52.904646 | orchestrator | 2025-09-03 01:05:52.904657 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-09-03 01:05:52.904668 | orchestrator | Wednesday 03 September 2025 00:58:37 +0000 (0:00:01.030) 0:01:08.442 *** 2025-09-03 01:05:52.905322 | orchestrator | ok: [testbed-node-0] 2025-09-03 01:05:52.905336 | orchestrator | 2025-09-03 01:05:52.905347 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-09-03 01:05:52.905357 | orchestrator | Wednesday 03 September 2025 00:58:54 +0000 (0:00:16.347) 0:01:24.789 *** 2025-09-03 01:05:52.905367 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:05:52.905377 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:05:52.905387 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:05:52.905397 | orchestrator | 2025-09-03 01:05:52.905407 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2025-09-03 01:05:52.905417 | orchestrator | 2025-09-03 01:05:52.905426 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-09-03 01:05:52.905436 | orchestrator | Wednesday 03 September 2025 00:58:54 +0000 (0:00:00.298) 0:01:25.088 *** 2025-09-03 01:05:52.905446 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 01:05:52.905455 | orchestrator | 2025-09-03 01:05:52.905465 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2025-09-03 01:05:52.905475 | orchestrator | Wednesday 03 September 2025 00:58:55 +0000 (0:00:00.576) 0:01:25.665 *** 2025-09-03 01:05:52.905485 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:05:52.905494 | orchestrator | 
skipping: [testbed-node-2] 2025-09-03 01:05:52.905505 | orchestrator | changed: [testbed-node-0] 2025-09-03 01:05:52.905514 | orchestrator | 2025-09-03 01:05:52.905524 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2025-09-03 01:05:52.905534 | orchestrator | Wednesday 03 September 2025 00:58:56 +0000 (0:00:01.894) 0:01:27.559 *** 2025-09-03 01:05:52.905543 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:05:52.905553 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:05:52.905563 | orchestrator | changed: [testbed-node-0] 2025-09-03 01:05:52.905573 | orchestrator | 2025-09-03 01:05:52.905583 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-09-03 01:05:52.905592 | orchestrator | Wednesday 03 September 2025 00:58:59 +0000 (0:00:02.204) 0:01:29.764 *** 2025-09-03 01:05:52.905602 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:05:52.905612 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:05:52.905622 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:05:52.905631 | orchestrator | 2025-09-03 01:05:52.905641 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-09-03 01:05:52.905660 | orchestrator | Wednesday 03 September 2025 00:58:59 +0000 (0:00:00.751) 0:01:30.516 *** 2025-09-03 01:05:52.905670 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-09-03 01:05:52.905680 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:05:52.905690 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-09-03 01:05:52.905700 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:05:52.905710 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-09-03 01:05:52.905720 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2025-09-03 01:05:52.905730 | orchestrator | 2025-09-03 01:05:52.905750 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-09-03 01:05:52.905760 | orchestrator | Wednesday 03 September 2025 00:59:09 +0000 (0:00:09.310) 0:01:39.826 *** 2025-09-03 01:05:52.905770 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:05:52.905779 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:05:52.905789 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:05:52.905799 | orchestrator | 2025-09-03 01:05:52.905809 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-09-03 01:05:52.905819 | orchestrator | Wednesday 03 September 2025 00:59:10 +0000 (0:00:00.927) 0:01:40.753 *** 2025-09-03 01:05:52.905829 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-09-03 01:05:52.905838 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:05:52.905848 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-09-03 01:05:52.905858 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:05:52.905868 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-09-03 01:05:52.905878 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:05:52.905887 | orchestrator | 2025-09-03 01:05:52.905897 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-09-03 01:05:52.905907 | orchestrator | Wednesday 03 September 2025 00:59:11 +0000 (0:00:01.250) 0:01:42.004 *** 2025-09-03 01:05:52.905947 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:05:52.905960 | orchestrator | skipping: 
[testbed-node-2] 2025-09-03 01:05:52.905971 | orchestrator | changed: [testbed-node-0] 2025-09-03 01:05:52.905983 | orchestrator | 2025-09-03 01:05:52.905995 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2025-09-03 01:05:52.906185 | orchestrator | Wednesday 03 September 2025 00:59:11 +0000 (0:00:00.456) 0:01:42.460 *** 2025-09-03 01:05:52.906211 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:05:52.906298 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:05:52.906321 | orchestrator | changed: [testbed-node-0] 2025-09-03 01:05:52.906337 | orchestrator | 2025-09-03 01:05:52.906353 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2025-09-03 01:05:52.906363 | orchestrator | Wednesday 03 September 2025 00:59:13 +0000 (0:00:01.211) 0:01:43.671 *** 2025-09-03 01:05:52.906373 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:05:52.906383 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:05:52.906473 | orchestrator | changed: [testbed-node-0] 2025-09-03 01:05:52.906488 | orchestrator | 2025-09-03 01:05:52.906498 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2025-09-03 01:05:52.906508 | orchestrator | Wednesday 03 September 2025 00:59:15 +0000 (0:00:02.435) 0:01:46.107 *** 2025-09-03 01:05:52.906518 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:05:52.906527 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:05:52.906537 | orchestrator | ok: [testbed-node-0] 2025-09-03 01:05:52.906547 | orchestrator | 2025-09-03 01:05:52.906557 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-09-03 01:05:52.906567 | orchestrator | Wednesday 03 September 2025 00:59:36 +0000 (0:00:21.395) 0:02:07.502 *** 2025-09-03 01:05:52.906576 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:05:52.906586 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:05:52.906596 | orchestrator | ok: [testbed-node-0] 2025-09-03 01:05:52.906606 | orchestrator | 2025-09-03 01:05:52.906616 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-09-03 01:05:52.906625 | orchestrator | Wednesday 03 September 2025 00:59:48 +0000 (0:00:11.107) 0:02:18.609 *** 2025-09-03 01:05:52.906635 | orchestrator | ok: [testbed-node-0] 2025-09-03 01:05:52.906645 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:05:52.906655 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:05:52.906665 | orchestrator | 2025-09-03 01:05:52.906674 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2025-09-03 01:05:52.906684 | orchestrator | Wednesday 03 September 2025 00:59:49 +0000 (0:00:01.559) 0:02:20.168 *** 2025-09-03 01:05:52.906694 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:05:52.906720 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:05:52.906730 | orchestrator | changed: [testbed-node-0] 2025-09-03 01:05:52.906740 | orchestrator | 2025-09-03 01:05:52.906750 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2025-09-03 01:05:52.906760 | orchestrator | Wednesday 03 September 2025 01:00:01 +0000 (0:00:11.469) 0:02:31.638 *** 2025-09-03 01:05:52.906770 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:05:52.906779 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:05:52.906789 | orchestrator | skipping: 
[testbed-node-2] 2025-09-03 01:05:52.906799 | orchestrator | 2025-09-03 01:05:52.906809 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-09-03 01:05:52.906819 | orchestrator | Wednesday 03 September 2025 01:00:02 +0000 (0:00:01.023) 0:02:32.662 *** 2025-09-03 01:05:52.906828 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:05:52.906838 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:05:52.906848 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:05:52.906858 | orchestrator | 2025-09-03 01:05:52.906867 | orchestrator | PLAY [Apply role nova] ********************************************************* 2025-09-03 01:05:52.906877 | orchestrator | 2025-09-03 01:05:52.906908 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-09-03 01:05:52.906987 | orchestrator | Wednesday 03 September 2025 01:00:02 +0000 (0:00:00.463) 0:02:33.125 *** 2025-09-03 01:05:52.906996 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 01:05:52.907008 | orchestrator | 2025-09-03 01:05:52.907018 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2025-09-03 01:05:52.907028 | orchestrator | Wednesday 03 September 2025 01:00:03 +0000 (0:00:00.517) 0:02:33.642 *** 2025-09-03 01:05:52.907038 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2025-09-03 01:05:52.907056 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2025-09-03 01:05:52.907067 | orchestrator | 2025-09-03 01:05:52.907077 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2025-09-03 01:05:52.907087 | orchestrator | Wednesday 03 September 2025 01:00:06 +0000 (0:00:03.171) 0:02:36.813 *** 2025-09-03 01:05:52.907099 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2025-09-03 01:05:52.907114 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2025-09-03 01:05:52.907126 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2025-09-03 01:05:52.907138 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2025-09-03 01:05:52.907150 | orchestrator | 2025-09-03 01:05:52.907162 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2025-09-03 01:05:52.907173 | orchestrator | Wednesday 03 September 2025 01:00:12 +0000 (0:00:06.357) 0:02:43.171 *** 2025-09-03 01:05:52.907185 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-03 01:05:52.907197 | orchestrator | 2025-09-03 01:05:52.907208 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2025-09-03 01:05:52.907220 | orchestrator | Wednesday 03 September 2025 01:00:15 +0000 (0:00:03.160) 0:02:46.331 *** 2025-09-03 01:05:52.907231 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-03 01:05:52.907243 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2025-09-03 01:05:52.907254 | orchestrator | 2025-09-03 01:05:52.907266 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2025-09-03 01:05:52.907277 | 
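
The service-ks-register block above registers Nova in Keystone: the compute service, its internal and public v2.1 endpoints, the service project, the nova service user and its role assignments. An equivalent hedged sketch using the plain openstack CLI instead of the role's OpenStack modules (region name, the OS_CLOUD profile and the nova_keystone_password variable are assumptions; the endpoint URLs are taken from the log):

- name: Register the compute service, endpoints and service user (CLI sketch, not the kolla implementation)
  ansible.builtin.command: "{{ item }}"
  loop:
    - openstack service create --name nova --description "OpenStack Compute" compute
    - openstack endpoint create --region RegionOne compute internal https://api-int.testbed.osism.xyz:8774/v2.1
    - openstack endpoint create --region RegionOne compute public https://api.testbed.osism.xyz:8774/v2.1
    - openstack user create --project service --password "{{ nova_keystone_password }}" nova
    - openstack role add --project service --user nova admin
  environment:
    OS_CLOUD: default   # assumes a configured clouds.yaml entry
  changed_when: true

Unlike this sketch, the real role checks for existing resources first, which is why pre-existing items such as the service project and the admin role report ok rather than changed in the output above.
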
orchestrator | Wednesday 03 September 2025 01:00:19 +0000 (0:00:03.665) 0:02:49.997 *** 2025-09-03 01:05:52.907286 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-03 01:05:52.907295 | orchestrator | 2025-09-03 01:05:52.907305 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2025-09-03 01:05:52.907322 | orchestrator | Wednesday 03 September 2025 01:00:22 +0000 (0:00:03.532) 0:02:53.529 *** 2025-09-03 01:05:52.907331 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2025-09-03 01:05:52.907341 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2025-09-03 01:05:52.907350 | orchestrator | 2025-09-03 01:05:52.907360 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-09-03 01:05:52.907439 | orchestrator | Wednesday 03 September 2025 01:00:30 +0000 (0:00:07.770) 0:03:01.300 *** 2025-09-03 01:05:52.907458 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-03 01:05:52.907478 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-03 01:05:52.907489 | orchestrator | changed: [testbed-node-1] 
=> (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-03 01:05:52.907532 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-03 01:05:52.907545 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-03 01:05:52.907554 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-03 01:05:52.907562 | orchestrator | 2025-09-03 01:05:52.907570 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2025-09-03 01:05:52.907578 | orchestrator | Wednesday 03 September 2025 01:00:32 +0000 (0:00:01.411) 0:03:02.711 *** 2025-09-03 01:05:52.907586 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:05:52.907594 | orchestrator | 2025-09-03 01:05:52.907602 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 
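
The service definitions looped over in the "Ensuring config directories exist" task above embed each container's healthcheck: healthcheck_curl probes the API on port 8774, and healthcheck_port checks that the nova-scheduler process has a connection on 5672, its RabbitMQ link. The same reachability can be approximated from outside the containers, for example (addresses taken from the log; thresholds and the RabbitMQ host are assumptions):

- name: Probe the nova-api bind address (illustrative)
  ansible.builtin.uri:
    url: "http://192.168.16.10:8774"
    status_code: 200
  register: api_probe
  until: api_probe.status == 200
  retries: 3
  delay: 30

- name: Confirm the RabbitMQ port is reachable (illustrative)
  ansible.builtin.wait_for:
    host: 192.168.16.10   # assumption: RabbitMQ listens on the same internal address
    port: 5672
    timeout: 30

Note that wait_for only verifies that the port answers, whereas kolla's healthcheck_port verifies that the scheduler process itself holds an open connection to it.
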
2025-09-03 01:05:52.907610 | orchestrator | Wednesday 03 September 2025 01:00:32 +0000 (0:00:00.189) 0:03:02.900 *** 2025-09-03 01:05:52.907617 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:05:52.907625 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:05:52.907633 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:05:52.907641 | orchestrator | 2025-09-03 01:05:52.907649 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2025-09-03 01:05:52.907657 | orchestrator | Wednesday 03 September 2025 01:00:32 +0000 (0:00:00.413) 0:03:03.314 *** 2025-09-03 01:05:52.907665 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-03 01:05:52.907672 | orchestrator | 2025-09-03 01:05:52.907680 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2025-09-03 01:05:52.907688 | orchestrator | Wednesday 03 September 2025 01:00:33 +0000 (0:00:00.536) 0:03:03.850 *** 2025-09-03 01:05:52.907700 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:05:52.907708 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:05:52.907716 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:05:52.907724 | orchestrator | 2025-09-03 01:05:52.907732 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-09-03 01:05:52.907740 | orchestrator | Wednesday 03 September 2025 01:00:33 +0000 (0:00:00.350) 0:03:04.200 *** 2025-09-03 01:05:52.907748 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 01:05:52.907756 | orchestrator | 2025-09-03 01:05:52.907764 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-09-03 01:05:52.907777 | orchestrator | Wednesday 03 September 2025 01:00:33 +0000 (0:00:00.378) 0:03:04.578 *** 2025-09-03 01:05:52.907786 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-03 01:05:52.907818 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-03 01:05:52.907833 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-03 01:05:52.907843 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-03 01:05:52.907857 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-03 01:05:52.907886 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 
'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-03 01:05:52.907896 | orchestrator | 2025-09-03 01:05:52.907904 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-09-03 01:05:52.907932 | orchestrator | Wednesday 03 September 2025 01:00:36 +0000 (0:00:02.250) 0:03:06.829 *** 2025-09-03 01:05:52.907945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-03 01:05:52.907955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-03 01:05:52.907963 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:05:52.907983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-03 01:05:52.908000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-03 01:05:52.908009 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:05:52.908044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-03 01:05:52.908055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-03 01:05:52.908064 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:05:52.908072 | orchestrator | 2025-09-03 01:05:52.908080 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-09-03 01:05:52.908088 | orchestrator | Wednesday 03 September 2025 01:00:37 +0000 (0:00:01.246) 0:03:08.076 *** 2025-09-03 01:05:52.908101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 
'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-03 01:05:52.908116 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-03 01:05:52.908125 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:05:52.908158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-03 01:05:52.908168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-03 01:05:52.908177 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:05:52.908190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-03 01:05:52.908204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-03 01:05:52.908213 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:05:52.908221 | orchestrator | 2025-09-03 01:05:52.908229 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2025-09-03 01:05:52.908237 | orchestrator | Wednesday 03 September 2025 01:00:38 +0000 (0:00:01.038) 0:03:09.114 *** 2025-09-03 01:05:52.908267 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-03 01:05:52.908278 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-03 01:05:52.908297 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-03 01:05:52.908307 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-03 01:05:52.908339 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-03 01:05:52.908349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-03 01:05:52.908357 | orchestrator | 2025-09-03 01:05:52.908365 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-09-03 01:05:52.908373 | orchestrator | Wednesday 03 September 2025 01:00:40 +0000 (0:00:02.362) 0:03:11.477 *** 2025-09-03 01:05:52.908381 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-03 01:05:52.908400 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-03 01:05:52.908433 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-03 01:05:52.908443 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-03 01:05:52.908451 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-03 01:05:52.908465 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-03 01:05:52.908474 | orchestrator | 2025-09-03 01:05:52.908482 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-09-03 01:05:52.908494 | orchestrator | Wednesday 03 
September 2025 01:00:49 +0000 (0:00:08.694) 0:03:20.172 *** 2025-09-03 01:05:52.908502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-03 01:05:52.908532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-03 01:05:52.908541 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:05:52.908550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-03 01:05:52.908565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-03 01:05:52.908574 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:05:52.908586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-03 01:05:52.908596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-03 01:05:52.908604 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:05:52.908612 | orchestrator | 2025-09-03 01:05:52.908621 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-09-03 01:05:52.908629 | orchestrator | Wednesday 03 September 2025 01:00:50 +0000 (0:00:00.884) 0:03:21.057 *** 2025-09-03 01:05:52.908637 | orchestrator | changed: [testbed-node-0] 2025-09-03 01:05:52.908645 | orchestrator | changed: [testbed-node-1] 2025-09-03 01:05:52.908653 | orchestrator | changed: [testbed-node-2] 2025-09-03 01:05:52.908661 | orchestrator | 2025-09-03 01:05:52.908690 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-09-03 01:05:52.908700 | orchestrator | Wednesday 03 September 2025 01:00:52 +0000 (0:00:02.083) 0:03:23.140 *** 2025-09-03 01:05:52.908708 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:05:52.908716 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:05:52.908723 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:05:52.908731 | orchestrator | 2025-09-03 01:05:52.908739 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-09-03 
01:05:52.908747 | orchestrator | Wednesday 03 September 2025 01:00:52 +0000 (0:00:00.393) 0:03:23.534 *** 2025-09-03 01:05:52.908756 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-03 01:05:52.908776 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-03 01:05:52.908785 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-03 01:05:52.908817 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-03 01:05:52.908833 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-03 01:05:52.908841 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-03 01:05:52.908850 | orchestrator | 2025-09-03 01:05:52.908858 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-09-03 01:05:52.908866 | orchestrator | Wednesday 03 September 2025 01:00:55 +0000 (0:00:03.047) 0:03:26.582 *** 2025-09-03 01:05:52.908874 | orchestrator | 2025-09-03 01:05:52.908882 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-09-03 01:05:52.908889 | orchestrator | Wednesday 03 September 2025 01:00:56 +0000 (0:00:00.272) 0:03:26.855 *** 2025-09-03 01:05:52.908897 | orchestrator | 2025-09-03 01:05:52.908905 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-09-03 01:05:52.908934 | orchestrator | Wednesday 03 September 2025 01:00:56 +0000 (0:00:00.250) 0:03:27.105 *** 2025-09-03 01:05:52.908942 | orchestrator | 2025-09-03 01:05:52.908955 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2025-09-03 01:05:52.908963 | orchestrator | Wednesday 03 September 2025 01:00:56 +0000 (0:00:00.239) 0:03:27.345 *** 2025-09-03 01:05:52.908971 | orchestrator | changed: [testbed-node-0] 2025-09-03 01:05:52.908979 | orchestrator | changed: [testbed-node-2] 2025-09-03 01:05:52.908987 | orchestrator | 
changed: [testbed-node-1] 2025-09-03 01:05:52.908995 | orchestrator | 2025-09-03 01:05:52.909003 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2025-09-03 01:05:52.909011 | orchestrator | Wednesday 03 September 2025 01:01:16 +0000 (0:00:19.608) 0:03:46.953 *** 2025-09-03 01:05:52.909019 | orchestrator | changed: [testbed-node-0] 2025-09-03 01:05:52.909027 | orchestrator | changed: [testbed-node-1] 2025-09-03 01:05:52.909035 | orchestrator | changed: [testbed-node-2] 2025-09-03 01:05:52.909043 | orchestrator | 2025-09-03 01:05:52.909051 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2025-09-03 01:05:52.909059 | orchestrator | 2025-09-03 01:05:52.909067 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-03 01:05:52.909074 | orchestrator | Wednesday 03 September 2025 01:01:22 +0000 (0:00:06.222) 0:03:53.176 *** 2025-09-03 01:05:52.909083 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 01:05:52.909091 | orchestrator | 2025-09-03 01:05:52.909098 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-03 01:05:52.909106 | orchestrator | Wednesday 03 September 2025 01:01:23 +0000 (0:00:01.099) 0:03:54.275 *** 2025-09-03 01:05:52.909114 | orchestrator | skipping: [testbed-node-3] 2025-09-03 01:05:52.909122 | orchestrator | skipping: [testbed-node-4] 2025-09-03 01:05:52.909135 | orchestrator | skipping: [testbed-node-5] 2025-09-03 01:05:52.909143 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:05:52.909151 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:05:52.909159 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:05:52.909167 | orchestrator | 2025-09-03 01:05:52.909175 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2025-09-03 01:05:52.909183 | orchestrator | Wednesday 03 September 2025 01:01:24 +0000 (0:00:00.826) 0:03:55.101 *** 2025-09-03 01:05:52.909191 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:05:52.909199 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:05:52.909207 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:05:52.909215 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-03 01:05:52.909223 | orchestrator | 2025-09-03 01:05:52.909231 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-09-03 01:05:52.909263 | orchestrator | Wednesday 03 September 2025 01:01:25 +0000 (0:00:00.878) 0:03:55.980 *** 2025-09-03 01:05:52.909272 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2025-09-03 01:05:52.909281 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2025-09-03 01:05:52.909289 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2025-09-03 01:05:52.909297 | orchestrator | 2025-09-03 01:05:52.909305 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-09-03 01:05:52.909313 | orchestrator | Wednesday 03 September 2025 01:01:26 +0000 (0:00:00.985) 0:03:56.966 *** 2025-09-03 01:05:52.909321 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2025-09-03 01:05:52.909330 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2025-09-03 01:05:52.909338 | 
orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2025-09-03 01:05:52.909346 | orchestrator | 2025-09-03 01:05:52.909354 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-09-03 01:05:52.909362 | orchestrator | Wednesday 03 September 2025 01:01:27 +0000 (0:00:01.495) 0:03:58.461 *** 2025-09-03 01:05:52.909370 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2025-09-03 01:05:52.909378 | orchestrator | skipping: [testbed-node-3] 2025-09-03 01:05:52.909386 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2025-09-03 01:05:52.909394 | orchestrator | skipping: [testbed-node-4] 2025-09-03 01:05:52.909402 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2025-09-03 01:05:52.909410 | orchestrator | skipping: [testbed-node-5] 2025-09-03 01:05:52.909418 | orchestrator | 2025-09-03 01:05:52.909426 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2025-09-03 01:05:52.909434 | orchestrator | Wednesday 03 September 2025 01:01:29 +0000 (0:00:01.225) 0:03:59.687 *** 2025-09-03 01:05:52.909442 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2025-09-03 01:05:52.909450 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2025-09-03 01:05:52.909458 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-03 01:05:52.909465 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-03 01:05:52.909473 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:05:52.909482 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-03 01:05:52.909489 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-03 01:05:52.909497 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2025-09-03 01:05:52.909505 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-09-03 01:05:52.909513 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-09-03 01:05:52.909521 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:05:52.909529 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-03 01:05:52.909537 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-03 01:05:52.909550 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:05:52.909559 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-09-03 01:05:52.909571 | orchestrator | 2025-09-03 01:05:52.909579 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2025-09-03 01:05:52.909587 | orchestrator | Wednesday 03 September 2025 01:01:30 +0000 (0:00:01.467) 0:04:01.155 *** 2025-09-03 01:05:52.909595 | orchestrator | changed: [testbed-node-3] 2025-09-03 01:05:52.909603 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:05:52.909611 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:05:52.909619 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:05:52.909627 | orchestrator | changed: [testbed-node-4] 2025-09-03 01:05:52.909635 | orchestrator | changed: [testbed-node-5] 2025-09-03 01:05:52.909643 | orchestrator | 2025-09-03 01:05:52.909652 | orchestrator | TASK 
[nova-cell : Mask qemu-kvm service] *************************************** 2025-09-03 01:05:52.909660 | orchestrator | Wednesday 03 September 2025 01:01:32 +0000 (0:00:01.685) 0:04:02.841 *** 2025-09-03 01:05:52.909668 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:05:52.909676 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:05:52.909684 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:05:52.909692 | orchestrator | changed: [testbed-node-4] 2025-09-03 01:05:52.909700 | orchestrator | changed: [testbed-node-5] 2025-09-03 01:05:52.909708 | orchestrator | changed: [testbed-node-3] 2025-09-03 01:05:52.909716 | orchestrator | 2025-09-03 01:05:52.909724 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-09-03 01:05:52.909732 | orchestrator | Wednesday 03 September 2025 01:01:34 +0000 (0:00:01.838) 0:04:04.679 *** 2025-09-03 01:05:52.909740 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-03 01:05:52.909772 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-03 01:05:52.909783 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-03 01:05:52.909798 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 
'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-03 01:05:52.909811 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-03 01:05:52.909819 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-03 01:05:52.909849 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-03 01:05:52.909859 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version 
--daemon'], 'timeout': '30'}}}) 2025-09-03 01:05:52.909867 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-03 01:05:52.909881 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-03 01:05:52.909894 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-03 01:05:52.909903 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-03 01:05:52.909987 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-03 01:05:52.909998 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-03 01:05:52.910006 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-03 01:05:52.910048 | orchestrator | 2025-09-03 01:05:52.910058 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-03 01:05:52.910067 | orchestrator | Wednesday 03 September 2025 01:01:37 +0000 (0:00:03.638) 0:04:08.318 *** 2025-09-03 01:05:52.910075 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 01:05:52.910084 | orchestrator | 2025-09-03 01:05:52.910093 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-09-03 01:05:52.910101 | orchestrator | Wednesday 03 September 2025 01:01:40 +0000 (0:00:02.512) 0:04:10.831 *** 2025-09-03 01:05:52.910114 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-03 01:05:52.910123 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-03 01:05:52.910131 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-03 01:05:52.910163 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-03 01:05:52.910179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-03 01:05:52.910187 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-03 01:05:52.910200 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-03 01:05:52.910209 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-03 01:05:52.910217 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-03 01:05:52.910243 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-03 01:05:52.910251 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-03 01:05:52.910263 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-03 01:05:52.910270 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-03 01:05:52.910284 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-03 01:05:52.910291 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-03 01:05:52.910298 | orchestrator | 2025-09-03 01:05:52.910305 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-09-03 01:05:52.910312 | orchestrator | Wednesday 03 September 2025 01:01:44 +0000 (0:00:04.407) 0:04:15.238 *** 2025-09-03 01:05:52.910336 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-03 01:05:52.910352 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-03 01:05:52.910359 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-03 01:05:52.910366 | orchestrator | skipping: [testbed-node-3] 2025-09-03 01:05:52.910377 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-03 01:05:52.910385 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-03 01:05:52.910409 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-03 01:05:52.910424 | orchestrator | skipping: [testbed-node-4] 2025-09-03 01:05:52.910431 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-03 01:05:52.910438 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-03 01:05:52.910449 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-03 01:05:52.910456 | orchestrator | skipping: [testbed-node-5] 2025-09-03 01:05:52.910463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-03 01:05:52.910470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-conductor 5672'], 'timeout': '30'}}})  2025-09-03 01:05:52.910477 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:05:52.910503 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-03 01:05:52.910517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-03 01:05:52.910524 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:05:52.910531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-03 01:05:52.910538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-03 01:05:52.910545 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:05:52.910552 | orchestrator | 2025-09-03 01:05:52.910559 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-09-03 01:05:52.910569 | orchestrator | Wednesday 03 September 2025 01:01:46 +0000 (0:00:01.735) 0:04:16.973 *** 2025-09-03 01:05:52.910576 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-03 01:05:52.910583 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-03 01:05:52.910614 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-03 01:05:52.910622 | orchestrator | skipping: [testbed-node-3] 2025-09-03 01:05:52.910629 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-03 01:05:52.910637 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-03 01:05:52.910647 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': 
{'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-03 01:05:52.910654 | orchestrator | skipping: [testbed-node-4] 2025-09-03 01:05:52.910661 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-03 01:05:52.910689 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-03 01:05:52.910698 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-03 01:05:52.910705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-03 01:05:52.910712 | orchestrator | skipping: [testbed-node-5] 2025-09-03 01:05:52.910719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-03 01:05:52.910726 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:05:52.910736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-03 01:05:52.910744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-03 01:05:52.910755 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:05:52.910763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-03 01:05:52.910786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  
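The per-node skips above line up with each service's 'group' field as printed in the item dumps: the nova-libvirt, nova-ssh and nova-compute entries carry group 'compute' (changed/skipped accordingly on testbed-node-3..5), while nova-novncproxy and nova-conductor carry their own controller-side groups (handled on testbed-node-0..2). The following is a minimal sketch of that selection logic, reusing trimmed-down copies of the service dictionaries from the log; the host_groups mapping and the helper function are illustrative assumptions, not kolla-ansible's actual implementation.

    # Illustrative sketch only -- mimics how the 'group' field of each service
    # definition decides which hosts deploy it and which hosts skip the item.
    services = {
        "nova-libvirt":    {"container_name": "nova_libvirt",    "group": "compute",         "enabled": True},
        "nova-ssh":        {"container_name": "nova_ssh",        "group": "compute",         "enabled": True},
        "nova-compute":    {"container_name": "nova_compute",    "group": "compute",         "enabled": True},
        "nova-novncproxy": {"container_name": "nova_novncproxy", "group": "nova-novncproxy", "enabled": True},
        "nova-conductor":  {"container_name": "nova_conductor",  "group": "nova-conductor",  "enabled": True},
    }

    # Hypothetical inventory mapping for the testbed (assumption for illustration).
    host_groups = {
        "testbed-node-0": {"nova-novncproxy", "nova-conductor"},
        "testbed-node-3": {"compute"},
    }

    def services_for_host(host):
        """Return the service keys a host would deploy; everything else is skipped."""
        groups = host_groups.get(host, set())
        return [name for name, svc in services.items()
                if svc["enabled"] and svc["group"] in groups]

    print(services_for_host("testbed-node-3"))  # ['nova-libvirt', 'nova-ssh', 'nova-compute']
    print(services_for_host("testbed-node-0"))  # ['nova-novncproxy', 'nova-conductor']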
2025-09-03 01:05:52.910795 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:05:52.910801 | orchestrator | 2025-09-03 01:05:52.910808 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-03 01:05:52.910815 | orchestrator | Wednesday 03 September 2025 01:01:48 +0000 (0:00:02.317) 0:04:19.291 *** 2025-09-03 01:05:52.910822 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:05:52.910829 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:05:52.910836 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:05:52.910843 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-03 01:05:52.910850 | orchestrator | 2025-09-03 01:05:52.910856 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2025-09-03 01:05:52.910863 | orchestrator | Wednesday 03 September 2025 01:01:51 +0000 (0:00:02.326) 0:04:21.618 *** 2025-09-03 01:05:52.910870 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-09-03 01:05:52.910876 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-03 01:05:52.910883 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-09-03 01:05:52.910890 | orchestrator | 2025-09-03 01:05:52.910897 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2025-09-03 01:05:52.910904 | orchestrator | Wednesday 03 September 2025 01:01:51 +0000 (0:00:00.671) 0:04:22.290 *** 2025-09-03 01:05:52.910927 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-03 01:05:52.910939 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-09-03 01:05:52.910949 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-09-03 01:05:52.910960 | orchestrator | 2025-09-03 01:05:52.910971 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2025-09-03 01:05:52.910981 | orchestrator | Wednesday 03 September 2025 01:01:52 +0000 (0:00:00.935) 0:04:23.225 *** 2025-09-03 01:05:52.910992 | orchestrator | ok: [testbed-node-3] 2025-09-03 01:05:52.911003 | orchestrator | ok: [testbed-node-5] 2025-09-03 01:05:52.911014 | orchestrator | ok: [testbed-node-4] 2025-09-03 01:05:52.911026 | orchestrator | 2025-09-03 01:05:52.911037 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2025-09-03 01:05:52.911048 | orchestrator | Wednesday 03 September 2025 01:01:53 +0000 (0:00:00.563) 0:04:23.788 *** 2025-09-03 01:05:52.911058 | orchestrator | ok: [testbed-node-3] 2025-09-03 01:05:52.911069 | orchestrator | ok: [testbed-node-4] 2025-09-03 01:05:52.911080 | orchestrator | ok: [testbed-node-5] 2025-09-03 01:05:52.911090 | orchestrator | 2025-09-03 01:05:52.911101 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2025-09-03 01:05:52.911119 | orchestrator | Wednesday 03 September 2025 01:01:53 +0000 (0:00:00.778) 0:04:24.567 *** 2025-09-03 01:05:52.911130 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-09-03 01:05:52.911145 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-09-03 01:05:52.911156 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-09-03 01:05:52.911167 | orchestrator | 2025-09-03 01:05:52.911179 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2025-09-03 01:05:52.911189 | orchestrator | Wednesday 03 September 2025 01:01:55 +0000 
(0:00:01.072) 0:04:25.639 *** 2025-09-03 01:05:52.911200 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-09-03 01:05:52.911212 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-09-03 01:05:52.911223 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-09-03 01:05:52.911235 | orchestrator | 2025-09-03 01:05:52.911245 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2025-09-03 01:05:52.911256 | orchestrator | Wednesday 03 September 2025 01:01:56 +0000 (0:00:01.115) 0:04:26.755 *** 2025-09-03 01:05:52.911266 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-09-03 01:05:52.911276 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-09-03 01:05:52.911287 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-09-03 01:05:52.911297 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2025-09-03 01:05:52.911308 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2025-09-03 01:05:52.911318 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2025-09-03 01:05:52.911328 | orchestrator | 2025-09-03 01:05:52.911338 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2025-09-03 01:05:52.911350 | orchestrator | Wednesday 03 September 2025 01:02:00 +0000 (0:00:04.216) 0:04:30.972 *** 2025-09-03 01:05:52.911362 | orchestrator | skipping: [testbed-node-3] 2025-09-03 01:05:52.911373 | orchestrator | skipping: [testbed-node-4] 2025-09-03 01:05:52.911384 | orchestrator | skipping: [testbed-node-5] 2025-09-03 01:05:52.911391 | orchestrator | 2025-09-03 01:05:52.911398 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2025-09-03 01:05:52.911405 | orchestrator | Wednesday 03 September 2025 01:02:01 +0000 (0:00:00.690) 0:04:31.663 *** 2025-09-03 01:05:52.911412 | orchestrator | skipping: [testbed-node-3] 2025-09-03 01:05:52.911418 | orchestrator | skipping: [testbed-node-4] 2025-09-03 01:05:52.911425 | orchestrator | skipping: [testbed-node-5] 2025-09-03 01:05:52.911432 | orchestrator | 2025-09-03 01:05:52.911438 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2025-09-03 01:05:52.911445 | orchestrator | Wednesday 03 September 2025 01:02:01 +0000 (0:00:00.497) 0:04:32.160 *** 2025-09-03 01:05:52.911452 | orchestrator | changed: [testbed-node-3] 2025-09-03 01:05:52.911458 | orchestrator | changed: [testbed-node-4] 2025-09-03 01:05:52.911465 | orchestrator | changed: [testbed-node-5] 2025-09-03 01:05:52.911472 | orchestrator | 2025-09-03 01:05:52.911505 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2025-09-03 01:05:52.911513 | orchestrator | Wednesday 03 September 2025 01:02:02 +0000 (0:00:01.301) 0:04:33.462 *** 2025-09-03 01:05:52.911520 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-09-03 01:05:52.911528 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-09-03 01:05:52.911535 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-09-03 01:05:52.911542 | orchestrator | changed: [testbed-node-3] => (item={'uuid': 
'63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-09-03 01:05:52.911549 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-09-03 01:05:52.911561 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-09-03 01:05:52.911568 | orchestrator | 2025-09-03 01:05:52.911575 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2025-09-03 01:05:52.911582 | orchestrator | Wednesday 03 September 2025 01:02:05 +0000 (0:00:03.101) 0:04:36.563 *** 2025-09-03 01:05:52.911588 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-03 01:05:52.911595 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-03 01:05:52.911602 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-03 01:05:52.911609 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-03 01:05:52.911615 | orchestrator | changed: [testbed-node-3] 2025-09-03 01:05:52.911622 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-03 01:05:52.911629 | orchestrator | changed: [testbed-node-5] 2025-09-03 01:05:52.911636 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-03 01:05:52.911642 | orchestrator | changed: [testbed-node-4] 2025-09-03 01:05:52.911649 | orchestrator | 2025-09-03 01:05:52.911656 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2025-09-03 01:05:52.911662 | orchestrator | Wednesday 03 September 2025 01:02:09 +0000 (0:00:03.268) 0:04:39.832 *** 2025-09-03 01:05:52.911669 | orchestrator | skipping: [testbed-node-3] 2025-09-03 01:05:52.911676 | orchestrator | 2025-09-03 01:05:52.911683 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2025-09-03 01:05:52.911690 | orchestrator | Wednesday 03 September 2025 01:02:09 +0000 (0:00:00.130) 0:04:39.962 *** 2025-09-03 01:05:52.911696 | orchestrator | skipping: [testbed-node-3] 2025-09-03 01:05:52.911703 | orchestrator | skipping: [testbed-node-4] 2025-09-03 01:05:52.911710 | orchestrator | skipping: [testbed-node-5] 2025-09-03 01:05:52.911717 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:05:52.911723 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:05:52.911730 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:05:52.911737 | orchestrator | 2025-09-03 01:05:52.911743 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2025-09-03 01:05:52.911756 | orchestrator | Wednesday 03 September 2025 01:02:09 +0000 (0:00:00.571) 0:04:40.534 *** 2025-09-03 01:05:52.911763 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-03 01:05:52.911770 | orchestrator | 2025-09-03 01:05:52.911777 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2025-09-03 01:05:52.911783 | orchestrator | Wednesday 03 September 2025 01:02:10 +0000 (0:00:00.636) 0:04:41.170 *** 2025-09-03 01:05:52.911790 | orchestrator | skipping: [testbed-node-3] 2025-09-03 01:05:52.911797 | orchestrator | skipping: [testbed-node-4] 2025-09-03 01:05:52.911804 | orchestrator | skipping: [testbed-node-5] 2025-09-03 01:05:52.911810 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:05:52.911817 | orchestrator | skipping: [testbed-node-1] 2025-09-03 
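The "Pushing nova secret xml for libvirt" and "Pushing secrets key for libvirt" tasks above place Ceph secret definitions (client.nova secret, UUID 5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd, and client.cinder secret, UUID 63dd366f-e403-41f2-beff-dad9980a1637) together with the keys extracted earlier from the keyring files. The sketch below shows the standard libvirt Ceph secret XML and the equivalent manual virsh operations; it is an approximation of what those tasks set up, not the role's actual code, and the /tmp paths are assumptions.

    # Sketch of the libvirt Ceph secrets corresponding to the two tasks above.
    # The XML layout and virsh commands are standard libvirt/Ceph usage; the
    # exact files kolla-ansible renders may differ.
    from textwrap import dedent

    SECRET_XML = dedent("""\
        <secret ephemeral='no' private='no'>
          <uuid>{uuid}</uuid>
          <usage type='ceph'>
            <name>{name}</name>
          </usage>
        </secret>
    """)

    secrets = [
        # UUIDs and names taken from the task output above.
        {"uuid": "5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd", "name": "client.nova secret"},
        {"uuid": "63dd366f-e403-41f2-beff-dad9980a1637", "name": "client.cinder secret"},
    ]

    for s in secrets:
        xml_path = f"/tmp/{s['uuid']}.xml"
        with open(xml_path, "w") as f:
            f.write(SECRET_XML.format(**s))
        # Defining the secret and attaching the base64 Ceph key by hand would be:
        #   virsh secret-define --file /tmp/<uuid>.xml
        #   virsh secret-set-value --secret <uuid> --base64 <key from the ceph keyring>
        print("virsh secret-define --file", xml_path)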
01:05:52.911824 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:05:52.911831 | orchestrator | 2025-09-03 01:05:52.911837 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2025-09-03 01:05:52.911844 | orchestrator | Wednesday 03 September 2025 01:02:11 +0000 (0:00:00.717) 0:04:41.888 *** 2025-09-03 01:05:52.911852 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-03 01:05:52.911870 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-03 01:05:52.911878 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-03 01:05:52.911886 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 
67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-03 01:05:52.911897 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-03 01:05:52.911904 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-03 01:05:52.911927 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-03 01:05:52.911953 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-03 01:05:52.911965 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-03 01:05:52.911973 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-03 01:05:52.911980 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-03 01:05:52.911990 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-03 01:05:52.911998 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-03 01:05:52.912015 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-03 01:05:52.912023 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-03 01:05:52.912030 | orchestrator | 2025-09-03 01:05:52.912037 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2025-09-03 01:05:52.912043 | orchestrator | Wednesday 03 September 2025 01:02:14 +0000 (0:00:03.680) 0:04:45.568 *** 2025-09-03 01:05:52.912051 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-03 01:05:52.912079 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-03 01:05:52.912087 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-03 01:05:52.912098 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-03 01:05:52.912109 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-03 01:05:52.912117 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-03 01:05:52.912124 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-03 01:05:52.912135 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-03 01:05:52.912146 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': 
{'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-03 01:05:52.912159 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-03 01:05:52.912167 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-03 01:05:52.912174 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-03 01:05:52.912181 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-03 01:05:52.912191 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-03 01:05:52.912198 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-03 01:05:52.912210 | orchestrator | 2025-09-03 01:05:52.912217 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2025-09-03 01:05:52.912224 | orchestrator | Wednesday 03 September 2025 01:02:21 +0000 (0:00:06.384) 0:04:51.953 *** 2025-09-03 01:05:52.912231 | orchestrator | skipping: [testbed-node-3] 2025-09-03 01:05:52.912237 | orchestrator | skipping: [testbed-node-5] 2025-09-03 01:05:52.912244 | orchestrator | skipping: [testbed-node-4] 2025-09-03 01:05:52.912251 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:05:52.912258 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:05:52.912264 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:05:52.912271 | orchestrator | 2025-09-03 01:05:52.912278 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2025-09-03 01:05:52.912285 | orchestrator | Wednesday 03 September 2025 01:02:22 +0000 (0:00:01.257) 0:04:53.210 *** 2025-09-03 01:05:52.912291 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-09-03 01:05:52.912298 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-09-03 01:05:52.912304 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-09-03 01:05:52.912311 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-09-03 01:05:52.912321 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-09-03 01:05:52.912328 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:05:52.912335 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-09-03 01:05:52.912341 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-09-03 01:05:52.912348 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-09-03 01:05:52.912355 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:05:52.912362 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-09-03 01:05:52.912368 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:05:52.912375 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-09-03 01:05:52.912382 | orchestrator | changed: [testbed-node-5] => (item={'src': 
'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-09-03 01:05:52.912389 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-09-03 01:05:52.912396 | orchestrator | 2025-09-03 01:05:52.912402 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2025-09-03 01:05:52.912409 | orchestrator | Wednesday 03 September 2025 01:02:26 +0000 (0:00:03.717) 0:04:56.928 *** 2025-09-03 01:05:52.912416 | orchestrator | skipping: [testbed-node-3] 2025-09-03 01:05:52.912423 | orchestrator | skipping: [testbed-node-4] 2025-09-03 01:05:52.912429 | orchestrator | skipping: [testbed-node-5] 2025-09-03 01:05:52.912436 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:05:52.912443 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:05:52.912450 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:05:52.912456 | orchestrator | 2025-09-03 01:05:52.912463 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2025-09-03 01:05:52.912470 | orchestrator | Wednesday 03 September 2025 01:02:26 +0000 (0:00:00.617) 0:04:57.546 *** 2025-09-03 01:05:52.912476 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-09-03 01:05:52.912488 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-09-03 01:05:52.912495 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-09-03 01:05:52.912502 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-09-03 01:05:52.912509 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-09-03 01:05:52.912516 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-09-03 01:05:52.912526 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-09-03 01:05:52.912533 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-09-03 01:05:52.912540 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-09-03 01:05:52.912546 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-09-03 01:05:52.912553 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:05:52.912560 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-09-03 01:05:52.912567 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:05:52.912574 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-09-03 01:05:52.912580 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:05:52.912587 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-09-03 01:05:52.912593 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-09-03 
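The libvirt configuration and SASL configuration tasks above iterate over items of the form {'src': '<template>.j2', 'dest': '<file>'} (optionally with a 'service' key) and render each template onto the node. The sketch below only illustrates that src-to-dest rendering loop; the template bodies and variables are made up for the example, and the real .j2 files ship with kolla-ansible.

    # Rough illustration of the "src template -> dest file" loop seen in the
    # libvirt configuration / SASL configuration tasks above.
    from jinja2 import Template

    items = [
        {"src": "qemu.conf.j2", "dest": "qemu.conf", "service": "nova-libvirt"},
        {"src": "libvirtd.conf.j2", "dest": "libvirtd.conf", "service": "nova-libvirt"},
        {"src": "auth.conf.j2", "dest": "auth.conf", "service": "nova-compute"},
    ]

    # Stand-in template bodies (assumption -- not the real kolla-ansible templates).
    fake_templates = {
        "qemu.conf.j2": "max_files = {{ max_files }}\n",
        "libvirtd.conf.j2": "listen_tcp = {{ 1 if libvirt_tcp else 0 }}\n",
        "auth.conf.j2": "[credentials-default]\nauthname={{ sasl_user }}\n",
    }

    variables = {"max_files": 32768, "libvirt_tcp": False, "sasl_user": "nova"}

    for item in items:
        rendered = Template(fake_templates[item["src"]]).render(**variables)
        # kolla-ansible would write this under /etc/kolla/<service>/<dest> on the
        # node; here the rendered result is just printed.
        print(f"--- {item['service']}/{item['dest']} ---")
        print(rendered)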
01:05:52.912600 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-09-03 01:05:52.912607 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-09-03 01:05:52.912613 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-09-03 01:05:52.912620 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-09-03 01:05:52.912626 | orchestrator | 2025-09-03 01:05:52.912633 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2025-09-03 01:05:52.912640 | orchestrator | Wednesday 03 September 2025 01:02:33 +0000 (0:00:06.969) 0:05:04.515 *** 2025-09-03 01:05:52.912647 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-09-03 01:05:52.912653 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-09-03 01:05:52.912663 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-09-03 01:05:52.912670 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-09-03 01:05:52.912677 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-09-03 01:05:52.912684 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-09-03 01:05:52.912691 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-03 01:05:52.912697 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-03 01:05:52.912704 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-03 01:05:52.912715 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-03 01:05:52.912722 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-03 01:05:52.912728 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-09-03 01:05:52.912735 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:05:52.912742 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-03 01:05:52.912749 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-09-03 01:05:52.912756 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:05:52.912762 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-09-03 01:05:52.912769 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:05:52.912776 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-03 01:05:52.912783 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-03 01:05:52.912790 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-03 01:05:52.912796 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-03 01:05:52.912803 | orchestrator | changed: [testbed-node-3] => (item={'src': 
'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-03 01:05:52.912810 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-03 01:05:52.912816 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-03 01:05:52.912823 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-03 01:05:52.912830 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-03 01:05:52.912836 | orchestrator | 2025-09-03 01:05:52.912843 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2025-09-03 01:05:52.912850 | orchestrator | Wednesday 03 September 2025 01:02:42 +0000 (0:00:08.294) 0:05:12.810 *** 2025-09-03 01:05:52.912862 | orchestrator | skipping: [testbed-node-3] 2025-09-03 01:05:52.912870 | orchestrator | skipping: [testbed-node-4] 2025-09-03 01:05:52.912876 | orchestrator | skipping: [testbed-node-5] 2025-09-03 01:05:52.912883 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:05:52.912890 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:05:52.912897 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:05:52.912904 | orchestrator | 2025-09-03 01:05:52.912951 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2025-09-03 01:05:52.912959 | orchestrator | Wednesday 03 September 2025 01:02:42 +0000 (0:00:00.691) 0:05:13.502 *** 2025-09-03 01:05:52.912966 | orchestrator | skipping: [testbed-node-3] 2025-09-03 01:05:52.912973 | orchestrator | skipping: [testbed-node-4] 2025-09-03 01:05:52.912980 | orchestrator | skipping: [testbed-node-5] 2025-09-03 01:05:52.912986 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:05:52.912993 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:05:52.913000 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:05:52.913007 | orchestrator | 2025-09-03 01:05:52.913013 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-09-03 01:05:52.913020 | orchestrator | Wednesday 03 September 2025 01:02:43 +0000 (0:00:00.533) 0:05:14.036 *** 2025-09-03 01:05:52.913027 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:05:52.913034 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:05:52.913040 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:05:52.913047 | orchestrator | changed: [testbed-node-3] 2025-09-03 01:05:52.913054 | orchestrator | changed: [testbed-node-5] 2025-09-03 01:05:52.913060 | orchestrator | changed: [testbed-node-4] 2025-09-03 01:05:52.913067 | orchestrator | 2025-09-03 01:05:52.913074 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2025-09-03 01:05:52.913086 | orchestrator | Wednesday 03 September 2025 01:02:45 +0000 (0:00:02.030) 0:05:16.066 *** 2025-09-03 01:05:52.913097 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-03 01:05:52.913105 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-03 01:05:52.913112 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-03 01:05:52.913119 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-03 01:05:52.913130 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-03 01:05:52.913137 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 
'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-03 01:05:52.913149 | orchestrator | skipping: [testbed-node-4] 2025-09-03 01:05:52.913156 | orchestrator | skipping: [testbed-node-3] 2025-09-03 01:05:52.913167 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-03 01:05:52.913175 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-03 01:05:52.913182 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-03 01:05:52.913189 | orchestrator | skipping: [testbed-node-5] 2025-09-03 01:05:52.913200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-03 01:05:52.913207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-03 01:05:52.913219 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:05:52.913225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-03 01:05:52.913236 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-03 01:05:52.913243 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:05:52.913249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-03 01:05:52.913256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-03 01:05:52.913263 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:05:52.913269 | 
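Every container definition above (and in the "Check nova-cell containers" results below) carries a healthcheck of the form {'interval': '30', 'retries': '3', 'start_period': '5', 'test': [...], 'timeout': '30'}. The test commands (healthcheck_curl, healthcheck_listen, healthcheck_port, virsh version --daemon) are helpers shipped inside the kolla images; the sketch below only approximates the two simplest probe styles in plain Python, reusing the timeout value and example endpoints from the log, and is not the helpers' actual implementation.

    # Approximate stand-ins for the healthcheck_listen / healthcheck_curl style
    # probes referenced in the container definitions above.
    import socket
    import urllib.request

    HEALTHCHECK = {"interval": 30, "retries": 3, "start_period": 5, "timeout": 30}

    def check_listen(host: str, port: int, timeout: float = HEALTHCHECK["timeout"]) -> bool:
        """Succeed if something accepts TCP connections on host:port (cf. 'healthcheck_listen sshd 8022')."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    def check_curl(url: str, timeout: float = HEALTHCHECK["timeout"]) -> bool:
        """Succeed if the URL answers with an HTTP status below 400 (cf. 'healthcheck_curl .../vnc_lite.html')."""
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.status < 400
        except OSError:
            return False

    if __name__ == "__main__":
        print(check_listen("192.168.16.10", 8022))
        print(check_curl("http://192.168.16.10:6080/vnc_lite.html"))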
orchestrator | 2025-09-03 01:05:52.913276 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2025-09-03 01:05:52.913282 | orchestrator | Wednesday 03 September 2025 01:02:46 +0000 (0:00:01.136) 0:05:17.202 *** 2025-09-03 01:05:52.913288 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-09-03 01:05:52.913295 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-09-03 01:05:52.913301 | orchestrator | skipping: [testbed-node-3] 2025-09-03 01:05:52.913307 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-09-03 01:05:52.913314 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-09-03 01:05:52.913320 | orchestrator | skipping: [testbed-node-4] 2025-09-03 01:05:52.913326 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-09-03 01:05:52.913336 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-09-03 01:05:52.913342 | orchestrator | skipping: [testbed-node-5] 2025-09-03 01:05:52.913352 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-09-03 01:05:52.913359 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-09-03 01:05:52.913365 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:05:52.913371 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-09-03 01:05:52.913378 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-09-03 01:05:52.913384 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:05:52.913390 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-09-03 01:05:52.913397 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-09-03 01:05:52.913403 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:05:52.913409 | orchestrator | 2025-09-03 01:05:52.913416 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2025-09-03 01:05:52.913422 | orchestrator | Wednesday 03 September 2025 01:02:47 +0000 (0:00:00.703) 0:05:17.906 *** 2025-09-03 01:05:52.913429 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-03 01:05:52.913440 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-03 01:05:52.913447 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-03 01:05:52.913453 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-03 01:05:52.913468 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-03 01:05:52.913475 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-03 01:05:52.913481 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-03 01:05:52.913492 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-03 01:05:52.913499 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-03 01:05:52.913506 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-03 01:05:52.913512 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-03 01:05:52.913526 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 
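The items echoed by this task show the shape of the data being looped over: a mapping from service name to container definition, each carrying an image, volumes and a healthcheck. A minimal Python sketch of that structure, abridged from the entries above (volumes and most services omitted for brevity); the helper is only for inspection and is not part of the kolla-ansible role:

```python
# Abridged reconstruction of the per-service definitions printed by
# "Check nova-cell containers" above. Field names and values are taken
# from the log items; only two services are shown here.
nova_cell_services = {
    "nova-libvirt": {
        "container_name": "nova_libvirt",
        "group": "compute",
        "enabled": True,
        "image": "registry.osism.tech/kolla/nova-libvirt:2024.2",
        "privileged": True,
        "healthcheck": {
            "interval": "30",
            "retries": "3",
            "start_period": "5",
            "test": ["CMD-SHELL", "virsh version --daemon"],
            "timeout": "30",
        },
    },
    "nova-compute": {
        "container_name": "nova_compute",
        "group": "compute",
        "enabled": True,
        "image": "registry.osism.tech/kolla/nova-compute:2024.2",
        "healthcheck": {
            "interval": "30",
            "retries": "3",
            "start_period": "5",
            "test": ["CMD-SHELL", "healthcheck_port nova-compute 5672"],
            "timeout": "30",
        },
    },
}

def describe_healthchecks(services: dict) -> None:
    """Print the healthcheck command configured for each enabled service."""
    for name, spec in services.items():
        if not spec.get("enabled"):
            continue
        check = spec.get("healthcheck", {})
        print(f"{name}: {' '.join(check.get('test', []))} "
              f"(every {check.get('interval')}s, {check.get('retries')} retries)")

if __name__ == "__main__":
    describe_healthchecks(nova_cell_services)
```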
2025-09-03 01:05:52.913533 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-03 01:05:52.913543 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-03 01:05:52.913550 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-03 01:05:52.913557 | orchestrator | 2025-09-03 01:05:52.913563 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-03 01:05:52.913569 | orchestrator | Wednesday 03 September 2025 01:02:50 +0000 (0:00:02.718) 0:05:20.625 *** 2025-09-03 01:05:52.913576 | orchestrator | skipping: [testbed-node-3] 2025-09-03 01:05:52.913582 | orchestrator | skipping: [testbed-node-4] 2025-09-03 01:05:52.913588 | orchestrator | skipping: [testbed-node-5] 2025-09-03 01:05:52.913595 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:05:52.913601 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:05:52.913613 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:05:52.913619 | orchestrator | 2025-09-03 01:05:52.913625 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-03 01:05:52.913631 | orchestrator | Wednesday 03 September 2025 01:02:51 +0000 (0:00:01.156) 0:05:21.781 *** 2025-09-03 01:05:52.913638 | orchestrator | 2025-09-03 01:05:52.913644 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-03 01:05:52.913650 | orchestrator | Wednesday 03 September 2025 01:02:51 +0000 (0:00:00.143) 0:05:21.924 
*** 2025-09-03 01:05:52.913656 | orchestrator | 2025-09-03 01:05:52.913663 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-03 01:05:52.913669 | orchestrator | Wednesday 03 September 2025 01:02:51 +0000 (0:00:00.155) 0:05:22.080 *** 2025-09-03 01:05:52.913675 | orchestrator | 2025-09-03 01:05:52.913681 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-03 01:05:52.913688 | orchestrator | Wednesday 03 September 2025 01:02:51 +0000 (0:00:00.176) 0:05:22.257 *** 2025-09-03 01:05:52.913694 | orchestrator | 2025-09-03 01:05:52.913700 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-03 01:05:52.913706 | orchestrator | Wednesday 03 September 2025 01:02:51 +0000 (0:00:00.175) 0:05:22.432 *** 2025-09-03 01:05:52.913712 | orchestrator | 2025-09-03 01:05:52.913719 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-03 01:05:52.913728 | orchestrator | Wednesday 03 September 2025 01:02:51 +0000 (0:00:00.142) 0:05:22.575 *** 2025-09-03 01:05:52.913734 | orchestrator | 2025-09-03 01:05:52.913741 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2025-09-03 01:05:52.913747 | orchestrator | Wednesday 03 September 2025 01:02:52 +0000 (0:00:00.336) 0:05:22.912 *** 2025-09-03 01:05:52.913753 | orchestrator | changed: [testbed-node-0] 2025-09-03 01:05:52.913759 | orchestrator | changed: [testbed-node-2] 2025-09-03 01:05:52.913766 | orchestrator | changed: [testbed-node-1] 2025-09-03 01:05:52.913772 | orchestrator | 2025-09-03 01:05:52.913778 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2025-09-03 01:05:52.913785 | orchestrator | Wednesday 03 September 2025 01:03:04 +0000 (0:00:12.259) 0:05:35.171 *** 2025-09-03 01:05:52.913791 | orchestrator | changed: [testbed-node-0] 2025-09-03 01:05:52.913797 | orchestrator | changed: [testbed-node-1] 2025-09-03 01:05:52.913804 | orchestrator | changed: [testbed-node-2] 2025-09-03 01:05:52.913810 | orchestrator | 2025-09-03 01:05:52.913816 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2025-09-03 01:05:52.913823 | orchestrator | Wednesday 03 September 2025 01:03:16 +0000 (0:00:12.055) 0:05:47.226 *** 2025-09-03 01:05:52.913829 | orchestrator | changed: [testbed-node-4] 2025-09-03 01:05:52.913835 | orchestrator | changed: [testbed-node-5] 2025-09-03 01:05:52.913841 | orchestrator | changed: [testbed-node-3] 2025-09-03 01:05:52.913848 | orchestrator | 2025-09-03 01:05:52.913854 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2025-09-03 01:05:52.913860 | orchestrator | Wednesday 03 September 2025 01:03:37 +0000 (0:00:21.285) 0:06:08.512 *** 2025-09-03 01:05:52.913867 | orchestrator | changed: [testbed-node-3] 2025-09-03 01:05:52.913873 | orchestrator | changed: [testbed-node-5] 2025-09-03 01:05:52.913879 | orchestrator | changed: [testbed-node-4] 2025-09-03 01:05:52.913886 | orchestrator | 2025-09-03 01:05:52.913892 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2025-09-03 01:05:52.913898 | orchestrator | Wednesday 03 September 2025 01:04:15 +0000 (0:00:37.939) 0:06:46.451 *** 2025-09-03 01:05:52.913904 | orchestrator | FAILED - RETRYING: [testbed-node-3]: Checking libvirt container is ready (10 retries left). 
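The handler here re-checks libvirt readiness up to ten times before reporting changed, which is why the first attempts log FAILED - RETRYING. A minimal sketch of such a retry loop, reusing the container's own healthcheck command ('virsh version --daemon') as the probe; the probe command and delay used by the real handler may differ:

```python
# Retry-until-ready sketch mirroring the behaviour visible in the log:
# poll libvirt inside the nova_libvirt container, up to 10 attempts.
import subprocess
import sys
import time

RETRIES = 10        # matches "(10 retries left)" in the log
DELAY_SECONDS = 5   # assumed pause between attempts

def libvirt_ready() -> bool:
    """Return True once virsh can reach the libvirt daemon in the container."""
    result = subprocess.run(
        ["docker", "exec", "nova_libvirt", "virsh", "version", "--daemon"],
        capture_output=True,
    )
    return result.returncode == 0

def wait_for_libvirt() -> None:
    for attempt in range(1, RETRIES + 1):
        if libvirt_ready():
            print(f"libvirt ready after {attempt} attempt(s)")
            return
        print(f"FAILED - RETRYING ({RETRIES - attempt} retries left)")
        time.sleep(DELAY_SECONDS)
    sys.exit("libvirt container did not become ready")

if __name__ == "__main__":
    wait_for_libvirt()
```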
2025-09-03 01:05:52.913926 | orchestrator | FAILED - RETRYING: [testbed-node-4]: Checking libvirt container is ready (10 retries left). 2025-09-03 01:05:52.913938 | orchestrator | FAILED - RETRYING: [testbed-node-5]: Checking libvirt container is ready (10 retries left). 2025-09-03 01:05:52.913948 | orchestrator | changed: [testbed-node-3] 2025-09-03 01:05:52.913959 | orchestrator | changed: [testbed-node-4] 2025-09-03 01:05:52.913973 | orchestrator | changed: [testbed-node-5] 2025-09-03 01:05:52.913980 | orchestrator | 2025-09-03 01:05:52.913990 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2025-09-03 01:05:52.913997 | orchestrator | Wednesday 03 September 2025 01:04:22 +0000 (0:00:06.357) 0:06:52.809 *** 2025-09-03 01:05:52.914003 | orchestrator | changed: [testbed-node-3] 2025-09-03 01:05:52.914009 | orchestrator | changed: [testbed-node-4] 2025-09-03 01:05:52.914038 | orchestrator | changed: [testbed-node-5] 2025-09-03 01:05:52.914045 | orchestrator | 2025-09-03 01:05:52.914052 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2025-09-03 01:05:52.914058 | orchestrator | Wednesday 03 September 2025 01:04:23 +0000 (0:00:00.807) 0:06:53.617 *** 2025-09-03 01:05:52.914065 | orchestrator | changed: [testbed-node-3] 2025-09-03 01:05:52.914071 | orchestrator | changed: [testbed-node-4] 2025-09-03 01:05:52.914077 | orchestrator | changed: [testbed-node-5] 2025-09-03 01:05:52.914083 | orchestrator | 2025-09-03 01:05:52.914090 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2025-09-03 01:05:52.914096 | orchestrator | Wednesday 03 September 2025 01:04:45 +0000 (0:00:22.046) 0:07:15.663 *** 2025-09-03 01:05:52.914102 | orchestrator | skipping: [testbed-node-3] 2025-09-03 01:05:52.914109 | orchestrator | 2025-09-03 01:05:52.914115 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2025-09-03 01:05:52.914121 | orchestrator | Wednesday 03 September 2025 01:04:45 +0000 (0:00:00.106) 0:07:15.769 *** 2025-09-03 01:05:52.914127 | orchestrator | skipping: [testbed-node-4] 2025-09-03 01:05:52.914133 | orchestrator | skipping: [testbed-node-3] 2025-09-03 01:05:52.914140 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:05:52.914146 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:05:52.914152 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:05:52.914158 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 
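The registration wait that follows is the same pattern with up to twenty retries, delegated to testbed-node-0: keep querying Nova until every expected compute host has registered. A rough equivalent using the openstack CLI, assuming admin credentials are loaded and that the three compute nodes below are the expected hypervisors; the actual role drives the Nova API directly:

```python
# Poll the compute service list until all expected hosts appear,
# retrying up to 20 times as shown by "(20 retries left)" in the log.
import subprocess
import sys
import time

EXPECTED_HOSTS = {"testbed-node-3", "testbed-node-4", "testbed-node-5"}
RETRIES = 20        # matches the retry budget in the log
DELAY_SECONDS = 10  # assumed pause between attempts

def registered_hosts() -> set:
    """Hosts currently listed for the nova-compute binary."""
    out = subprocess.run(
        ["openstack", "compute", "service", "list",
         "--service", "nova-compute", "-f", "value", "-c", "Host"],
        capture_output=True, text=True, check=True,
    ).stdout
    return set(out.split())

def wait_for_registration() -> None:
    for attempt in range(1, RETRIES + 1):
        missing = EXPECTED_HOSTS - registered_hosts()
        if not missing:
            print(f"all compute services registered after {attempt} attempt(s)")
            return
        print(f"still waiting for {sorted(missing)} ({RETRIES - attempt} retries left)")
        time.sleep(DELAY_SECONDS)
    sys.exit("nova-compute services failed to register")

if __name__ == "__main__":
    wait_for_registration()
```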
2025-09-03 01:05:52.914165 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-09-03 01:05:52.914171 | orchestrator | 2025-09-03 01:05:52.914177 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2025-09-03 01:05:52.914183 | orchestrator | Wednesday 03 September 2025 01:05:08 +0000 (0:00:23.636) 0:07:39.406 *** 2025-09-03 01:05:52.914190 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:05:52.914196 | orchestrator | skipping: [testbed-node-5] 2025-09-03 01:05:52.914202 | orchestrator | skipping: [testbed-node-3] 2025-09-03 01:05:52.914208 | orchestrator | skipping: [testbed-node-4] 2025-09-03 01:05:52.914214 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:05:52.914220 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:05:52.914227 | orchestrator | 2025-09-03 01:05:52.914233 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2025-09-03 01:05:52.914239 | orchestrator | Wednesday 03 September 2025 01:05:16 +0000 (0:00:08.106) 0:07:47.513 *** 2025-09-03 01:05:52.914245 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:05:52.914251 | orchestrator | skipping: [testbed-node-3] 2025-09-03 01:05:52.914258 | orchestrator | skipping: [testbed-node-4] 2025-09-03 01:05:52.914264 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:05:52.914270 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:05:52.914276 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-5 2025-09-03 01:05:52.914283 | orchestrator | 2025-09-03 01:05:52.914289 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-09-03 01:05:52.914295 | orchestrator | Wednesday 03 September 2025 01:05:19 +0000 (0:00:03.007) 0:07:50.521 *** 2025-09-03 01:05:52.914308 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-09-03 01:05:52.914314 | orchestrator | 2025-09-03 01:05:52.914320 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-09-03 01:05:52.914326 | orchestrator | Wednesday 03 September 2025 01:05:31 +0000 (0:00:11.388) 0:08:01.910 *** 2025-09-03 01:05:52.914338 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-09-03 01:05:52.914344 | orchestrator | 2025-09-03 01:05:52.914350 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2025-09-03 01:05:52.914356 | orchestrator | Wednesday 03 September 2025 01:05:32 +0000 (0:00:01.133) 0:08:03.043 *** 2025-09-03 01:05:52.914363 | orchestrator | skipping: [testbed-node-5] 2025-09-03 01:05:52.914369 | orchestrator | 2025-09-03 01:05:52.914375 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2025-09-03 01:05:52.914382 | orchestrator | Wednesday 03 September 2025 01:05:33 +0000 (0:00:01.245) 0:08:04.289 *** 2025-09-03 01:05:52.914388 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-09-03 01:05:52.914394 | orchestrator | 2025-09-03 01:05:52.914400 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2025-09-03 01:05:52.914407 | orchestrator | Wednesday 03 September 2025 01:05:43 +0000 (0:00:09.772) 0:08:14.061 *** 2025-09-03 01:05:52.914413 | orchestrator | ok: [testbed-node-3] 2025-09-03 01:05:52.914419 | orchestrator | ok: [testbed-node-4] 2025-09-03 01:05:52.914425 | orchestrator | ok: 
[testbed-node-5] 2025-09-03 01:05:52.914432 | orchestrator | ok: [testbed-node-0] 2025-09-03 01:05:52.914438 | orchestrator | ok: [testbed-node-1] 2025-09-03 01:05:52.914445 | orchestrator | ok: [testbed-node-2] 2025-09-03 01:05:52.914451 | orchestrator | 2025-09-03 01:05:52.914457 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2025-09-03 01:05:52.914463 | orchestrator | 2025-09-03 01:05:52.914469 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2025-09-03 01:05:52.914476 | orchestrator | Wednesday 03 September 2025 01:05:45 +0000 (0:00:01.736) 0:08:15.798 *** 2025-09-03 01:05:52.914482 | orchestrator | changed: [testbed-node-0] 2025-09-03 01:05:52.914488 | orchestrator | changed: [testbed-node-1] 2025-09-03 01:05:52.914495 | orchestrator | changed: [testbed-node-2] 2025-09-03 01:05:52.914501 | orchestrator | 2025-09-03 01:05:52.914507 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2025-09-03 01:05:52.914514 | orchestrator | 2025-09-03 01:05:52.914520 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2025-09-03 01:05:52.914526 | orchestrator | Wednesday 03 September 2025 01:05:46 +0000 (0:00:01.118) 0:08:16.916 *** 2025-09-03 01:05:52.914532 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:05:52.914539 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:05:52.914545 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:05:52.914552 | orchestrator | 2025-09-03 01:05:52.914562 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2025-09-03 01:05:52.914568 | orchestrator | 2025-09-03 01:05:52.914575 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2025-09-03 01:05:52.914581 | orchestrator | Wednesday 03 September 2025 01:05:46 +0000 (0:00:00.497) 0:08:17.414 *** 2025-09-03 01:05:52.914587 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2025-09-03 01:05:52.914594 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-09-03 01:05:52.914600 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-09-03 01:05:52.914607 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2025-09-03 01:05:52.914613 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2025-09-03 01:05:52.914619 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2025-09-03 01:05:52.914625 | orchestrator | skipping: [testbed-node-3] 2025-09-03 01:05:52.914632 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2025-09-03 01:05:52.914638 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-09-03 01:05:52.914644 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-09-03 01:05:52.914651 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2025-09-03 01:05:52.914657 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2025-09-03 01:05:52.914667 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2025-09-03 01:05:52.914674 | orchestrator | skipping: [testbed-node-4] 2025-09-03 01:05:52.914680 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2025-09-03 01:05:52.914686 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-09-03 01:05:52.914693 | 
orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-09-03 01:05:52.914699 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2025-09-03 01:05:52.914705 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2025-09-03 01:05:52.914711 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2025-09-03 01:05:52.914718 | orchestrator | skipping: [testbed-node-5] 2025-09-03 01:05:52.914724 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2025-09-03 01:05:52.914730 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-09-03 01:05:52.914737 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-09-03 01:05:52.914743 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2025-09-03 01:05:52.914749 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2025-09-03 01:05:52.914755 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2025-09-03 01:05:52.914762 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:05:52.914768 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2025-09-03 01:05:52.914774 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-09-03 01:05:52.914781 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-09-03 01:05:52.914787 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2025-09-03 01:05:52.914793 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2025-09-03 01:05:52.914803 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2025-09-03 01:05:52.914810 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:05:52.914816 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2025-09-03 01:05:52.914822 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-09-03 01:05:52.914829 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-09-03 01:05:52.914835 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2025-09-03 01:05:52.914841 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2025-09-03 01:05:52.914847 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2025-09-03 01:05:52.914854 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:05:52.914860 | orchestrator | 2025-09-03 01:05:52.914866 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2025-09-03 01:05:52.914873 | orchestrator | 2025-09-03 01:05:52.914879 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2025-09-03 01:05:52.914885 | orchestrator | Wednesday 03 September 2025 01:05:48 +0000 (0:00:01.246) 0:08:18.660 *** 2025-09-03 01:05:52.914891 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2025-09-03 01:05:52.914898 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2025-09-03 01:05:52.914904 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:05:52.914995 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2025-09-03 01:05:52.915015 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2025-09-03 01:05:52.915021 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:05:52.915028 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2025-09-03 01:05:52.915034 | orchestrator | 
skipping: [testbed-node-2] => (item=nova-api)
2025-09-03 01:05:52.915040 | orchestrator | skipping: [testbed-node-2]
2025-09-03 01:05:52.915046 | orchestrator |
2025-09-03 01:05:52.915053 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2025-09-03 01:05:52.915059 | orchestrator |
2025-09-03 01:05:52.915065 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2025-09-03 01:05:52.915079 | orchestrator | Wednesday 03 September 2025 01:05:48 +0000 (0:00:00.702) 0:08:19.362 ***
2025-09-03 01:05:52.915085 | orchestrator | skipping: [testbed-node-0]
2025-09-03 01:05:52.915091 | orchestrator |
2025-09-03 01:05:52.915097 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2025-09-03 01:05:52.915104 | orchestrator |
2025-09-03 01:05:52.915110 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2025-09-03 01:05:52.915116 | orchestrator | Wednesday 03 September 2025 01:05:49 +0000 (0:00:00.625) 0:08:19.988 ***
2025-09-03 01:05:52.915122 | orchestrator | skipping: [testbed-node-0]
2025-09-03 01:05:52.915135 | orchestrator | skipping: [testbed-node-1]
2025-09-03 01:05:52.915142 | orchestrator | skipping: [testbed-node-2]
2025-09-03 01:05:52.915148 | orchestrator |
2025-09-03 01:05:52.915154 | orchestrator | PLAY RECAP *********************************************************************
2025-09-03 01:05:52.915161 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-03 01:05:52.915167 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0
2025-09-03 01:05:52.915175 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2025-09-03 01:05:52.915181 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2025-09-03 01:05:52.915187 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-09-03 01:05:52.915194 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2025-09-03 01:05:52.915200 | orchestrator | testbed-node-5 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0
2025-09-03 01:05:52.915206 | orchestrator |
2025-09-03 01:05:52.915212 | orchestrator |
2025-09-03 01:05:52.915219 | orchestrator | TASKS RECAP ********************************************************************
2025-09-03 01:05:52.915225 | orchestrator | Wednesday 03 September 2025 01:05:49 +0000 (0:00:00.406) 0:08:20.394 ***
2025-09-03 01:05:52.915231 | orchestrator | ===============================================================================
2025-09-03 01:05:52.915237 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 37.94s
2025-09-03 01:05:52.915244 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 29.07s
2025-09-03 01:05:52.915250 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 23.64s
2025-09-03 01:05:52.915256 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 22.05s
2025-09-03 01:05:52.915262 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 21.39s
2025-09-03 01:05:52.915268 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 21.29s
2025-09-03 01:05:52.915274 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 19.61s
2025-09-03 01:05:52.915279 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 16.35s
2025-09-03 01:05:52.915284 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 12.26s
2025-09-03 01:05:52.915294 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 12.06s
2025-09-03 01:05:52.915300 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 11.78s
2025-09-03 01:05:52.915305 | orchestrator | nova-cell : Create cell ------------------------------------------------ 11.47s
2025-09-03 01:05:52.915311 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.39s
2025-09-03 01:05:52.915316 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.11s
2025-09-03 01:05:52.915325 | orchestrator | nova-cell : Discover nova hosts ----------------------------------------- 9.77s
2025-09-03 01:05:52.915331 | orchestrator | nova-cell : Get a list of existing cells -------------------------------- 9.52s
2025-09-03 01:05:52.915336 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 9.31s
2025-09-03 01:05:52.915341 | orchestrator | nova : Copying over nova.conf ------------------------------------------- 8.69s
2025-09-03 01:05:52.915347 | orchestrator | nova-cell : Copying files for nova-ssh ---------------------------------- 8.29s
2025-09-03 01:05:52.915352 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 8.11s
2025-09-03 01:05:52.915358 | orchestrator | 2025-09-03 01:05:52 | INFO  | Task 386c373b-52c5-46ef-99fe-921370775d27 is in state STARTED
2025-09-03 01:05:52.915363 | orchestrator | 2025-09-03 01:05:52 | INFO  | Task 1d321bf0-7329-4f26-b96d-940e598cfe39 is in state STARTED
2025-09-03 01:05:52.915369 | orchestrator | 2025-09-03 01:05:52 | INFO  | Wait 1 second(s) until the next check
2025-09-03 01:05:55.950789 | orchestrator | 2025-09-03 01:05:55 | INFO  | Task 386c373b-52c5-46ef-99fe-921370775d27 is in state STARTED
2025-09-03 01:05:55.952189 | orchestrator | 2025-09-03 01:05:55 | INFO  | Task 1d321bf0-7329-4f26-b96d-940e598cfe39 is in state STARTED
2025-09-03 01:05:55.952221 | orchestrator | 2025-09-03 01:05:55 | INFO  | Wait 1 second(s) until the next check
2025-09-03 01:05:59.004409 | orchestrator | 2025-09-03 01:05:59 | INFO  | Task 386c373b-52c5-46ef-99fe-921370775d27 is in state STARTED
2025-09-03 01:05:59.006959 | orchestrator | 2025-09-03 01:05:59 | INFO  | Task 1d321bf0-7329-4f26-b96d-940e598cfe39 is in state STARTED
2025-09-03 01:05:59.007271 | orchestrator | 2025-09-03 01:05:59 | INFO  | Wait 1 second(s) until the next check
2025-09-03 01:06:02.052993 | orchestrator | 2025-09-03 01:06:02 | INFO  | Task 386c373b-52c5-46ef-99fe-921370775d27 is in state STARTED
2025-09-03 01:06:02.055268 | orchestrator | 2025-09-03 01:06:02 | INFO  | Task 1d321bf0-7329-4f26-b96d-940e598cfe39 is in state STARTED
2025-09-03 01:06:02.055337 | orchestrator | 2025-09-03 01:06:02 | INFO  | Wait 1 second(s) until the next check
2025-09-03 01:06:05.099222 | orchestrator | 2025-09-03 01:06:05 | INFO  | Task 386c373b-52c5-46ef-99fe-921370775d27 is in state STARTED
2025-09-03 01:06:05.101213 | orchestrator | 2025-09-03 01:06:05 | INFO  | Task
1d321bf0-7329-4f26-b96d-940e598cfe39 is in state SUCCESS 2025-09-03 01:06:05.101671 | orchestrator | 2025-09-03 01:06:05 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:06:05.103703 | orchestrator | 2025-09-03 01:06:05.103742 | orchestrator | 2025-09-03 01:06:05.103755 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-03 01:06:05.103767 | orchestrator | 2025-09-03 01:06:05.103779 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-03 01:06:05.103791 | orchestrator | Wednesday 03 September 2025 01:03:44 +0000 (0:00:00.204) 0:00:00.204 *** 2025-09-03 01:06:05.103803 | orchestrator | ok: [testbed-node-0] 2025-09-03 01:06:05.103816 | orchestrator | ok: [testbed-node-1] 2025-09-03 01:06:05.103828 | orchestrator | ok: [testbed-node-2] 2025-09-03 01:06:05.103840 | orchestrator | 2025-09-03 01:06:05.103852 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-03 01:06:05.103863 | orchestrator | Wednesday 03 September 2025 01:03:44 +0000 (0:00:00.237) 0:00:00.441 *** 2025-09-03 01:06:05.103875 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2025-09-03 01:06:05.103887 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2025-09-03 01:06:05.104011 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2025-09-03 01:06:05.104024 | orchestrator | 2025-09-03 01:06:05.104036 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2025-09-03 01:06:05.104502 | orchestrator | 2025-09-03 01:06:05.104522 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-09-03 01:06:05.104535 | orchestrator | Wednesday 03 September 2025 01:03:45 +0000 (0:00:00.375) 0:00:00.817 *** 2025-09-03 01:06:05.104546 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 01:06:05.104558 | orchestrator | 2025-09-03 01:06:05.104570 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2025-09-03 01:06:05.104581 | orchestrator | Wednesday 03 September 2025 01:03:45 +0000 (0:00:00.489) 0:00:01.306 *** 2025-09-03 01:06:05.104610 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-03 01:06:05.104628 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': 
'3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-03 01:06:05.104640 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-03 01:06:05.104652 | orchestrator | 2025-09-03 01:06:05.104664 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2025-09-03 01:06:05.104675 | orchestrator | Wednesday 03 September 2025 01:03:46 +0000 (0:00:00.755) 0:00:02.062 *** 2025-09-03 01:06:05.104686 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2025-09-03 01:06:05.104699 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2025-09-03 01:06:05.104710 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-03 01:06:05.104722 | orchestrator | 2025-09-03 01:06:05.104733 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-09-03 01:06:05.104744 | orchestrator | Wednesday 03 September 2025 01:03:47 +0000 (0:00:00.826) 0:00:02.888 *** 2025-09-03 01:06:05.104776 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 01:06:05.104788 | orchestrator | 2025-09-03 01:06:05.104799 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2025-09-03 01:06:05.104810 | orchestrator | Wednesday 03 September 2025 01:03:48 +0000 (0:00:00.714) 0:00:03.602 *** 2025-09-03 01:06:05.104858 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-03 01:06:05.104883 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'3000', 'listen_port': '3000'}}}}) 2025-09-03 01:06:05.104902 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-03 01:06:05.104938 | orchestrator | 2025-09-03 01:06:05.104951 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2025-09-03 01:06:05.104963 | orchestrator | Wednesday 03 September 2025 01:03:49 +0000 (0:00:01.255) 0:00:04.858 *** 2025-09-03 01:06:05.104974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-03 01:06:05.104986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-03 01:06:05.104998 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:06:05.105010 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:06:05.105052 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-03 01:06:05.105076 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:06:05.105091 | orchestrator | 2025-09-03 01:06:05.105104 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2025-09-03 01:06:05.105118 | 
orchestrator | Wednesday 03 September 2025 01:03:49 +0000 (0:00:00.356) 0:00:05.214 *** 2025-09-03 01:06:05.105131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-03 01:06:05.105151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-03 01:06:05.105165 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:06:05.105179 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:06:05.105192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-03 01:06:05.105206 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:06:05.105220 | orchestrator | 2025-09-03 01:06:05.105234 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2025-09-03 01:06:05.105247 | orchestrator | Wednesday 03 September 2025 01:03:50 +0000 (0:00:00.831) 0:00:06.045 *** 2025-09-03 01:06:05.105261 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-03 01:06:05.105276 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 
'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-03 01:06:05.105324 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-03 01:06:05.105339 | orchestrator | 2025-09-03 01:06:05.105353 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2025-09-03 01:06:05.105366 | orchestrator | Wednesday 03 September 2025 01:03:51 +0000 (0:00:01.333) 0:00:07.379 *** 2025-09-03 01:06:05.105381 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-03 01:06:05.105400 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-03 01:06:05.105416 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-03 01:06:05.105430 | orchestrator | 2025-09-03 01:06:05.105442 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2025-09-03 01:06:05.105453 | orchestrator | Wednesday 03 September 2025 01:03:53 +0000 (0:00:01.407) 0:00:08.786 *** 2025-09-03 01:06:05.105464 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:06:05.105475 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:06:05.105487 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:06:05.105498 | orchestrator | 2025-09-03 01:06:05.105509 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2025-09-03 01:06:05.105528 | orchestrator | Wednesday 03 September 2025 01:03:53 +0000 (0:00:00.507) 0:00:09.293 *** 2025-09-03 01:06:05.105539 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-09-03 01:06:05.105551 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-09-03 01:06:05.105562 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-09-03 01:06:05.105573 | orchestrator | 2025-09-03 01:06:05.105584 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2025-09-03 01:06:05.105595 | orchestrator | Wednesday 03 September 2025 01:03:55 +0000 (0:00:01.370) 0:00:10.664 *** 2025-09-03 01:06:05.105606 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-09-03 01:06:05.105617 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-09-03 01:06:05.105629 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-09-03 01:06:05.105640 | orchestrator | 2025-09-03 01:06:05.105651 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2025-09-03 01:06:05.105662 | orchestrator | Wednesday 03 September 2025 01:03:56 +0000 (0:00:01.264) 0:00:11.929 *** 2025-09-03 01:06:05.105699 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-03 01:06:05.105712 | orchestrator | 2025-09-03 01:06:05.105723 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2025-09-03 01:06:05.105734 | orchestrator | Wednesday 03 September 2025 01:03:57 +0000 (0:00:00.764) 0:00:12.693 *** 2025-09-03 01:06:05.105745 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2025-09-03 01:06:05.105756 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2025-09-03 01:06:05.105767 | orchestrator | ok: [testbed-node-0] 2025-09-03 01:06:05.105778 | orchestrator | ok: [testbed-node-1] 2025-09-03 01:06:05.105790 | orchestrator | ok: [testbed-node-2] 2025-09-03 01:06:05.105801 | orchestrator | 2025-09-03 01:06:05.105812 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2025-09-03 01:06:05.105823 | orchestrator | Wednesday 03 September 2025 01:03:57 +0000 (0:00:00.691) 0:00:13.384 *** 2025-09-03 01:06:05.105834 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:06:05.105845 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:06:05.105856 | 
orchestrator | skipping: [testbed-node-2] 2025-09-03 01:06:05.105867 | orchestrator | 2025-09-03 01:06:05.105878 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2025-09-03 01:06:05.105889 | orchestrator | Wednesday 03 September 2025 01:03:58 +0000 (0:00:00.528) 0:00:13.913 *** 2025-09-03 01:06:05.105901 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1053472, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.7719476, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.105983 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1053472, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.7719476, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.106015 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1053472, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.7719476, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.106075 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1053560, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.7831545, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.106119 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1053560, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.7831545, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.106132 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1053560, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.7831545, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.106144 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1053507, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.7744706, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.106162 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1053507, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.7744706, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.106181 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1053507, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.7744706, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.106193 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1053564, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.7870603, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 
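For reference on the grafana provisioning steps in this task output (Prometheus data source, dashboards provisioning, and the custom dashboard copy loop): kolla-ansible renders these files into /etc/kolla/grafana/ on each controller, which the container bind-mounts read-only as /var/lib/kolla/config_files/ (see the volumes list in the grafana container definition earlier in this output). The snippet below is only a minimal, illustrative sketch of what such Grafana provisioning files generally look like; it is not the actual content of prometheus.yaml.j2 or the testbed's overlay provisioning.yaml, and the URL and dashboard path are hypothetical placeholders.

# Illustrative data source provisioning file (what a rendered prometheus.yaml typically contains)
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus.internal.example:9091   # placeholder; the real template points at the deployment's Prometheus endpoint
    isDefault: true

# Illustrative dashboard provider (what a provisioning.yaml typically contains)
apiVersion: 1
providers:
  - name: default
    orgId: 1
    type: file
    disableDeletion: false
    updateIntervalSeconds: 60
    options:
      path: /var/lib/grafana/dashboards   # placeholder; must match where the copied *.json dashboards land inside the container
      foldersFromFilesStructure: true

With a file provider like this in place, every dashboard JSON copied by the "Copying over custom dashboards" task (the ceph/, openstack/, and infrastructure/ folders listed below) is picked up automatically when the grafana container starts, with no manual import required.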
2025-09-03 01:06:05.106204 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1053564, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.7870603, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.106243 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1053564, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.7870603, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.106256 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1053525, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.7780733, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.106286 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1053525, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.7780733, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.106304 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1053525, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.7780733, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.106316 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1053549, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.7814045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.106328 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1053549, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.7814045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.106369 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1053549, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.7814045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.106383 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1053469, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.7697096, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.106395 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1053469, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.7697096, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.106411 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 
1053469, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.7697096, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.106430 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1053487, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.7728765, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.106442 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1053487, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.7728765, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.106453 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1053487, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.7728765, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.106490 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1053510, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.77476, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.106503 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1053510, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.77476, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.106517 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1053510, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.77476, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.106535 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1053536, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.7796555, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.106545 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1053536, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.7796555, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.106555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1053536, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.7796555, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.106589 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1053557, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.7827568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.106601 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1053557, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.7827568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.106611 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1053557, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.7827568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.106632 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1053497, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.7739663, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.106643 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1053497, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.7739663, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.106653 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1053497, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.7739663, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.106668 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': 
True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1053546, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.7810676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.106678 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1053546, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.7810676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.106689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1053546, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.7810676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.106709 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1053532, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.7789972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.106719 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1053532, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.7789972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.106730 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1053532, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 
'ctime': 1756858497.7789972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.106740 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1053521, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.7777824, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.106758 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1053521, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.7777824, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.106769 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1053521, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.7777824, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.106789 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1053516, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.7763371, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.106800 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1053516, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.7763371, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.106810 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1053516, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.7763371, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.106821 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1053539, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.780475, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.106839 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1053539, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.780475, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.106849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1053539, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.780475, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.106869 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1053512, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.775084, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.106883 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1053512, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.775084, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.106894 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1053512, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.775084, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.106904 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1053554, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.7824092, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.106939 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1053554, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.7824092, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.106950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1053554, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.7824092, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.106966 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1053772, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8189056, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.106982 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1053772, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8189056, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.106992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1053772, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8189056, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.107002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1053634, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.800228, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.107013 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1053634, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.800228, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.107029 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 
'inode': 1053634, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.800228, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.107065 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1053605, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.7905955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.107080 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1053605, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.7905955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.107091 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1053605, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.7905955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.107101 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1053667, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8025625, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.107111 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1053667, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 
1756858497.8025625, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.107128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1053667, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8025625, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.107145 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1053588, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.7878385, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.107160 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1053588, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.7878385, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.107171 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1053588, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.7878385, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.107181 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1053721, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8114283, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.107191 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1053721, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8114283, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.107207 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1053721, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8114283, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.107223 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1053676, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8088593, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.107234 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1053676, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8088593, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.107248 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1053676, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8088593, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.107259 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1053728, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8119814, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.107270 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1053728, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8119814, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.107284 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1053728, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8119814, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.107301 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1053760, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.817757, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.107312 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1053760, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.817757, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.107326 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1053760, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.817757, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.107337 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1053718, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.810457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.107347 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1053718, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.810457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.107358 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1053718, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.810457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.107379 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1053662, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8013515, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.107390 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1053662, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8013515, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.107404 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1053662, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8013515, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.107415 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1053627, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.7953753, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.107426 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1053627, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.7953753, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.107436 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1053627, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.7953753, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.107457 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1053657, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8008132, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.107468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1053657, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8008132, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.107485 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1053657, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8008132, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.107496 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1053609, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.7936041, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.107506 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1053609, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.7936041, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.107516 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1053609, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.7936041, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.107540 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1053664, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8020945, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.107551 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1053664, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8020945, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.107566 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1053664, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8020945, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.107577 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1053744, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8156304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.107587 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 
'inode': 1053744, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8156304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.107598 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1053744, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8156304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.107617 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1053736, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8135936, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.107628 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1053736, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8135936, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.107638 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1053736, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8135936, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.107653 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1053594, 'dev': 123, 'nlink': 
1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.7887595, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.107663 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1053594, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.7887595, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.107674 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1053594, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.7887595, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.107695 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1053599, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.7894857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.107706 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1053599, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.7894857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.107716 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1053599, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.7894857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.107731 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1053711, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8098311, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.107741 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1053711, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8098311, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.107752 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1053711, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8098311, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.107768 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1053730, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8126438, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.107783 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1053730, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8126438, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.107794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1053730, 'dev': 123, 'nlink': 1, 'atime': 1756857733.0, 'mtime': 1756857733.0, 'ctime': 1756858497.8126438, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-03 01:06:05.107804 | orchestrator | 2025-09-03 01:06:05.107814 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2025-09-03 01:06:05.107824 | orchestrator | Wednesday 03 September 2025 01:04:35 +0000 (0:00:37.070) 0:00:50.984 *** 2025-09-03 01:06:05.107839 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-03 01:06:05.107849 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-03 01:06:05.107865 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-03 01:06:05.107875 | orchestrator | 2025-09-03 01:06:05.107885 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2025-09-03 01:06:05.107895 | orchestrator | Wednesday 03 September 2025 01:04:36 +0000 (0:00:00.963) 0:00:51.947 *** 2025-09-03 01:06:05.107905 | orchestrator | changed: 
[testbed-node-0] 2025-09-03 01:06:05.107968 | orchestrator | 2025-09-03 01:06:05.107979 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2025-09-03 01:06:05.107989 | orchestrator | Wednesday 03 September 2025 01:04:38 +0000 (0:00:02.168) 0:00:54.116 *** 2025-09-03 01:06:05.107998 | orchestrator | changed: [testbed-node-0] 2025-09-03 01:06:05.108008 | orchestrator | 2025-09-03 01:06:05.108018 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-09-03 01:06:05.108028 | orchestrator | Wednesday 03 September 2025 01:04:40 +0000 (0:00:02.198) 0:00:56.315 *** 2025-09-03 01:06:05.108037 | orchestrator | 2025-09-03 01:06:05.108047 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-09-03 01:06:05.108062 | orchestrator | Wednesday 03 September 2025 01:04:40 +0000 (0:00:00.059) 0:00:56.375 *** 2025-09-03 01:06:05.108073 | orchestrator | 2025-09-03 01:06:05.108083 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-09-03 01:06:05.108092 | orchestrator | Wednesday 03 September 2025 01:04:40 +0000 (0:00:00.064) 0:00:56.439 *** 2025-09-03 01:06:05.108102 | orchestrator | 2025-09-03 01:06:05.108112 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2025-09-03 01:06:05.108122 | orchestrator | Wednesday 03 September 2025 01:04:41 +0000 (0:00:00.179) 0:00:56.618 *** 2025-09-03 01:06:05.108131 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:06:05.108141 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:06:05.108151 | orchestrator | changed: [testbed-node-0] 2025-09-03 01:06:05.108161 | orchestrator | 2025-09-03 01:06:05.108171 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2025-09-03 01:06:05.108181 | orchestrator | Wednesday 03 September 2025 01:04:42 +0000 (0:00:01.863) 0:00:58.481 *** 2025-09-03 01:06:05.108190 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:06:05.108200 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:06:05.108210 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2025-09-03 01:06:05.108221 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2025-09-03 01:06:05.108231 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 
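The three FAILED - RETRYING entries above come from a retried readiness probe: after the first Grafana container is restarted, the handler keeps polling it until the web service answers, and only then are the remaining containers restarted. A minimal sketch of such a wait, assuming Grafana is probed on the internal address 192.168.16.11:3000 with a 10-second delay (the actual kolla-ansible handler may use a different URL, check and timing):

```yaml
# Sketch only: URL and delay are assumptions, not values taken from this log.
- name: Waiting for grafana to start on first node
  ansible.builtin.uri:
    url: "http://192.168.16.11:3000/login"
    status_code: 200
  register: grafana_ready
  until: grafana_ready.status == 200
  retries: 12        # the log counts down from "12 retries left"
  delay: 10
  run_once: true
```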
2025-09-03 01:06:05.108241 | orchestrator | ok: [testbed-node-0] 2025-09-03 01:06:05.108250 | orchestrator | 2025-09-03 01:06:05.108260 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2025-09-03 01:06:05.108270 | orchestrator | Wednesday 03 September 2025 01:05:21 +0000 (0:00:38.311) 0:01:36.793 *** 2025-09-03 01:06:05.108280 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:06:05.108289 | orchestrator | changed: [testbed-node-2] 2025-09-03 01:06:05.108299 | orchestrator | changed: [testbed-node-1] 2025-09-03 01:06:05.108309 | orchestrator | 2025-09-03 01:06:05.108319 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2025-09-03 01:06:05.108335 | orchestrator | Wednesday 03 September 2025 01:05:56 +0000 (0:00:35.329) 0:02:12.123 *** 2025-09-03 01:06:05.108345 | orchestrator | ok: [testbed-node-0] 2025-09-03 01:06:05.108355 | orchestrator | 2025-09-03 01:06:05.108370 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2025-09-03 01:06:05.108380 | orchestrator | Wednesday 03 September 2025 01:05:58 +0000 (0:00:02.140) 0:02:14.264 *** 2025-09-03 01:06:05.108390 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:06:05.108400 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:06:05.108410 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:06:05.108419 | orchestrator | 2025-09-03 01:06:05.108429 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2025-09-03 01:06:05.108439 | orchestrator | Wednesday 03 September 2025 01:05:59 +0000 (0:00:00.457) 0:02:14.721 *** 2025-09-03 01:06:05.108451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})  2025-09-03 01:06:05.108464 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2025-09-03 01:06:05.108475 | orchestrator | 2025-09-03 01:06:05.108483 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2025-09-03 01:06:05.108491 | orchestrator | Wednesday 03 September 2025 01:06:01 +0000 (0:00:02.287) 0:02:17.009 *** 2025-09-03 01:06:05.108499 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:06:05.108507 | orchestrator | 2025-09-03 01:06:05.108516 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-03 01:06:05.108525 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-03 01:06:05.108534 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-03 01:06:05.108542 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-03 01:06:05.108550 | orchestrator | 2025-09-03 01:06:05.108558 | orchestrator | 2025-09-03 01:06:05.108566 | orchestrator | TASKS RECAP 
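The "Enable grafana datasources" task above pushes the datasource definitions shown in its loop items into Grafana (the disabled influxdb entry is skipped, the opensearch entry is applied). A hedged sketch of registering that opensearch datasource through Grafana's HTTP API, assuming basic-auth admin credentials and the API reachable behind the internal VIP on port 3000; the payload fields are taken from the loop item above, everything else is an assumption:

```yaml
# Sketch only: endpoint URL and credential variable are placeholders.
- name: Enable grafana datasources
  ansible.builtin.uri:
    url: "https://api-int.testbed.osism.xyz:3000/api/datasources"
    method: POST
    user: admin
    password: "{{ grafana_admin_password }}"
    force_basic_auth: true
    body_format: json
    body:
      name: opensearch
      type: grafana-opensearch-datasource
      access: proxy
      url: "https://api-int.testbed.osism.xyz:9200"
      jsonData:
        flavor: OpenSearch
        database: "flog-*"
        version: "2.11.1"
        timeField: "@timestamp"
        logLevelField: log_level
    status_code: [200, 409]   # 409 if the datasource already exists
  run_once: true
```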
******************************************************************** 2025-09-03 01:06:05.108575 | orchestrator | Wednesday 03 September 2025 01:06:01 +0000 (0:00:00.272) 0:02:17.281 *** 2025-09-03 01:06:05.108583 | orchestrator | =============================================================================== 2025-09-03 01:06:05.108591 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 38.31s 2025-09-03 01:06:05.108599 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 37.07s 2025-09-03 01:06:05.108607 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 35.33s 2025-09-03 01:06:05.108615 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.29s 2025-09-03 01:06:05.108623 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.20s 2025-09-03 01:06:05.108634 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.17s 2025-09-03 01:06:05.108643 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.14s 2025-09-03 01:06:05.108651 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.86s 2025-09-03 01:06:05.108659 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.41s 2025-09-03 01:06:05.108667 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.37s 2025-09-03 01:06:05.108680 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.33s 2025-09-03 01:06:05.108688 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.26s 2025-09-03 01:06:05.108696 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.26s 2025-09-03 01:06:05.108704 | orchestrator | grafana : Check grafana containers -------------------------------------- 0.96s 2025-09-03 01:06:05.108712 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.83s 2025-09-03 01:06:05.108720 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.83s 2025-09-03 01:06:05.108728 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.76s 2025-09-03 01:06:05.108736 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.76s 2025-09-03 01:06:05.108744 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.71s 2025-09-03 01:06:05.108752 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.69s 2025-09-03 01:06:08.146508 | orchestrator | 2025-09-03 01:06:08 | INFO  | Task 386c373b-52c5-46ef-99fe-921370775d27 is in state STARTED 2025-09-03 01:06:08.146622 | orchestrator | 2025-09-03 01:06:08 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:06:11.194388 | orchestrator | 2025-09-03 01:06:11 | INFO  | Task 386c373b-52c5-46ef-99fe-921370775d27 is in state STARTED 2025-09-03 01:06:11.194527 | orchestrator | 2025-09-03 01:06:11 | INFO  | Wait 1 second(s) until the next check 2025-09-03 01:06:14.239730 | orchestrator | 2025-09-03 01:06:14 | INFO  | Task 386c373b-52c5-46ef-99fe-921370775d27 is in state STARTED 2025-09-03 01:06:14.239833 | orchestrator | 2025-09-03 01:06:14 | INFO  | Wait 1 second(s) until the next check 2025-09-03 
01:06:17.283014 | orchestrator | 2025-09-03 01:06:17 | INFO  | Task 386c373b-52c5-46ef-99fe-921370775d27 is in state STARTED 2025-09-03 01:06:17.283128 | orchestrator | 2025-09-03 01:06:17 | INFO  | Wait 1 second(s) until the next check
[... the same pair of poll entries ("Task 386c373b-52c5-46ef-99fe-921370775d27 is in state STARTED" / "Wait 1 second(s) until the next check") repeats roughly every three seconds from 01:06:20 through 01:08:55 ...]
2025-09-03 01:08:58.645066 | orchestrator | 2025-09-03 01:08:58 | INFO  | Task 386c373b-52c5-46ef-99fe-921370775d27 is in state SUCCESS 2025-09-03 01:08:58.647413 | orchestrator | 2025-09-03 01:08:58.647468 | orchestrator | 2025-09-03 01:08:58.647489 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-03 01:08:58.647508 | orchestrator | 2025-09-03 01:08:58.647526 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-03 01:08:58.647544 | orchestrator | Wednesday 03 September 2025 01:04:24 +0000 (0:00:00.273) 0:00:00.273 *** 2025-09-03 01:08:58.647565 | orchestrator | ok: [testbed-node-0] 2025-09-03 01:08:58.647589 | orchestrator | ok: [testbed-node-1] 2025-09-03 01:08:58.647608 | orchestrator | ok: [testbed-node-2] 2025-09-03 01:08:58.647620 | orchestrator | 2025-09-03 01:08:58.647632 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-03 01:08:58.647662 | orchestrator | Wednesday
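The "Group hosts based on enabled services" task whose items follow (enable_octavia_True on all three nodes) builds dynamic groups that the later service plays target. Grouping like this is typically expressed with the group_by module; a minimal sketch, assuming a boolean enable_octavia variable:

```yaml
# Sketch: each host joins a group named after the toggle, e.g. enable_octavia_True.
- name: Group hosts based on enabled services
  ansible.builtin.group_by:
    key: "enable_octavia_{{ enable_octavia | bool }}"
```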
03 September 2025 01:04:24 +0000 (0:00:00.347) 0:00:00.621 *** 2025-09-03 01:08:58.647673 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-09-03 01:08:58.647802 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2025-09-03 01:08:58.647819 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2025-09-03 01:08:58.647830 | orchestrator | 2025-09-03 01:08:58.647841 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2025-09-03 01:08:58.647852 | orchestrator | 2025-09-03 01:08:58.647863 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-03 01:08:58.647874 | orchestrator | Wednesday 03 September 2025 01:04:25 +0000 (0:00:00.532) 0:00:01.153 *** 2025-09-03 01:08:58.647885 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 01:08:58.647896 | orchestrator | 2025-09-03 01:08:58.647907 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2025-09-03 01:08:58.647951 | orchestrator | Wednesday 03 September 2025 01:04:25 +0000 (0:00:00.707) 0:00:01.861 *** 2025-09-03 01:08:58.647964 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2025-09-03 01:08:58.647975 | orchestrator | 2025-09-03 01:08:58.647987 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2025-09-03 01:08:58.647999 | orchestrator | Wednesday 03 September 2025 01:04:29 +0000 (0:00:03.326) 0:00:05.187 *** 2025-09-03 01:08:58.648014 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2025-09-03 01:08:58.648027 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2025-09-03 01:08:58.648041 | orchestrator | 2025-09-03 01:08:58.648054 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2025-09-03 01:08:58.648067 | orchestrator | Wednesday 03 September 2025 01:04:36 +0000 (0:00:06.794) 0:00:11.982 *** 2025-09-03 01:08:58.648081 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-03 01:08:58.648095 | orchestrator | 2025-09-03 01:08:58.648107 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2025-09-03 01:08:58.648120 | orchestrator | Wednesday 03 September 2025 01:04:39 +0000 (0:00:03.172) 0:00:15.154 *** 2025-09-03 01:08:58.648133 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-03 01:08:58.648146 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-09-03 01:08:58.648158 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-09-03 01:08:58.648171 | orchestrator | 2025-09-03 01:08:58.648184 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2025-09-03 01:08:58.648197 | orchestrator | Wednesday 03 September 2025 01:04:47 +0000 (0:00:08.323) 0:00:23.478 *** 2025-09-03 01:08:58.648210 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-03 01:08:58.648223 | orchestrator | 2025-09-03 01:08:58.648236 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2025-09-03 01:08:58.648278 | orchestrator | Wednesday 03 September 2025 01:04:50 +0000 (0:00:03.406) 0:00:26.885 *** 2025-09-03 01:08:58.648293 | orchestrator | 
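The service-ks-register steps above register Octavia in Keystone: a load-balancer service, internal and public endpoints on port 9876, the octavia user in the service project with the admin role, followed by the Octavia-specific load-balancer_* roles. A hedged sketch of the same registrations with the openstack.cloud collection (the password variable is a placeholder and the service-creation step is omitted here); endpoints, user, project and role names mirror the log:

```yaml
# Sketch only: the password variable is a placeholder, not a value from this log.
- name: octavia | Creating endpoints
  openstack.cloud.endpoint:
    service: octavia
    endpoint_interface: "{{ item.interface }}"
    url: "{{ item.url }}"
  loop:
    - { interface: internal, url: "https://api-int.testbed.osism.xyz:9876" }
    - { interface: public, url: "https://api.testbed.osism.xyz:9876" }

- name: octavia | Creating users
  openstack.cloud.identity_user:
    name: octavia
    password: "{{ octavia_keystone_password }}"
    default_project: service

- name: octavia | Granting user roles
  openstack.cloud.role_assignment:
    user: octavia
    role: admin
    project: service

- name: Adding octavia related roles
  openstack.cloud.identity_role:
    name: "{{ item }}"
  loop:
    - load-balancer_observer
    - load-balancer_global_observer
    - load-balancer_member
    - load-balancer_admin
    - load-balancer_quota_admin
```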
changed: [testbed-node-0] => (item=octavia -> service -> admin) 2025-09-03 01:08:58.648770 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2025-09-03 01:08:58.648786 | orchestrator | 2025-09-03 01:08:58.648797 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2025-09-03 01:08:58.648809 | orchestrator | Wednesday 03 September 2025 01:04:58 +0000 (0:00:07.455) 0:00:34.341 *** 2025-09-03 01:08:58.648819 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2025-09-03 01:08:58.648830 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2025-09-03 01:08:58.648842 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2025-09-03 01:08:58.648853 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2025-09-03 01:08:58.648863 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2025-09-03 01:08:58.648874 | orchestrator | 2025-09-03 01:08:58.648885 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-03 01:08:58.648896 | orchestrator | Wednesday 03 September 2025 01:05:13 +0000 (0:00:15.244) 0:00:49.586 *** 2025-09-03 01:08:58.648907 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 01:08:58.648940 | orchestrator | 2025-09-03 01:08:58.648951 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2025-09-03 01:08:58.648962 | orchestrator | Wednesday 03 September 2025 01:05:14 +0000 (0:00:00.633) 0:00:50.219 *** 2025-09-03 01:08:58.648973 | orchestrator | changed: [testbed-node-0] 2025-09-03 01:08:58.648984 | orchestrator | 2025-09-03 01:08:58.648996 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2025-09-03 01:08:58.649289 | orchestrator | Wednesday 03 September 2025 01:05:18 +0000 (0:00:04.687) 0:00:54.907 *** 2025-09-03 01:08:58.649307 | orchestrator | changed: [testbed-node-0] 2025-09-03 01:08:58.649318 | orchestrator | 2025-09-03 01:08:58.649330 | orchestrator | TASK [octavia : Get service project id] **************************************** 2025-09-03 01:08:58.649378 | orchestrator | Wednesday 03 September 2025 01:05:23 +0000 (0:00:04.218) 0:00:59.125 *** 2025-09-03 01:08:58.649392 | orchestrator | ok: [testbed-node-0] 2025-09-03 01:08:58.649403 | orchestrator | 2025-09-03 01:08:58.649414 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2025-09-03 01:08:58.649425 | orchestrator | Wednesday 03 September 2025 01:05:26 +0000 (0:00:03.001) 0:01:02.126 *** 2025-09-03 01:08:58.649436 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2025-09-03 01:08:58.649447 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2025-09-03 01:08:58.649459 | orchestrator | 2025-09-03 01:08:58.649469 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2025-09-03 01:08:58.649489 | orchestrator | Wednesday 03 September 2025 01:05:36 +0000 (0:00:09.978) 0:01:12.105 *** 2025-09-03 01:08:58.649501 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2025-09-03 01:08:58.649513 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 
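Before configuring the Octavia containers, the role prepares Nova-side resources for the amphorae: a dedicated flavor and an SSH keypair, as logged above. A hedged sketch with the openstack.cloud collection; the sizing and key path are placeholders, not values from this log:

```yaml
# Sketch only: flavor sizing and public key path are placeholders.
- name: Create amphora flavor
  openstack.cloud.compute_flavor:
    name: amphora
    vcpus: 1
    ram: 1024
    disk: 5
    is_public: false

- name: Create nova keypair for amphora
  openstack.cloud.keypair:
    name: octavia_ssh_key
    public_key_file: /etc/kolla/octavia-worker/octavia_ssh_key.pub
```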
'dst_port': 22}]) 2025-09-03 01:08:58.649526 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2025-09-03 01:08:58.649538 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2025-09-03 01:08:58.649549 | orchestrator | 2025-09-03 01:08:58.649560 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2025-09-03 01:08:58.649571 | orchestrator | Wednesday 03 September 2025 01:05:52 +0000 (0:00:16.202) 0:01:28.307 *** 2025-09-03 01:08:58.649582 | orchestrator | changed: [testbed-node-0] 2025-09-03 01:08:58.649606 | orchestrator | 2025-09-03 01:08:58.649617 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2025-09-03 01:08:58.649628 | orchestrator | Wednesday 03 September 2025 01:05:56 +0000 (0:00:04.518) 0:01:32.826 *** 2025-09-03 01:08:58.649639 | orchestrator | changed: [testbed-node-0] 2025-09-03 01:08:58.649650 | orchestrator | 2025-09-03 01:08:58.649661 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2025-09-03 01:08:58.649672 | orchestrator | Wednesday 03 September 2025 01:06:03 +0000 (0:00:06.326) 0:01:39.152 *** 2025-09-03 01:08:58.649683 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:08:58.649694 | orchestrator | 2025-09-03 01:08:58.649705 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2025-09-03 01:08:58.649716 | orchestrator | Wednesday 03 September 2025 01:06:03 +0000 (0:00:00.217) 0:01:39.370 *** 2025-09-03 01:08:58.649727 | orchestrator | changed: [testbed-node-0] 2025-09-03 01:08:58.649738 | orchestrator | 2025-09-03 01:08:58.649748 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-03 01:08:58.649759 | orchestrator | Wednesday 03 September 2025 01:06:07 +0000 (0:00:04.245) 0:01:43.615 *** 2025-09-03 01:08:58.649770 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 01:08:58.649781 | orchestrator | 2025-09-03 01:08:58.649792 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2025-09-03 01:08:58.649803 | orchestrator | Wednesday 03 September 2025 01:06:08 +0000 (0:00:01.069) 0:01:44.685 *** 2025-09-03 01:08:58.649814 | orchestrator | changed: [testbed-node-1] 2025-09-03 01:08:58.649825 | orchestrator | changed: [testbed-node-0] 2025-09-03 01:08:58.649837 | orchestrator | changed: [testbed-node-2] 2025-09-03 01:08:58.649848 | orchestrator | 2025-09-03 01:08:58.649859 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2025-09-03 01:08:58.649870 | orchestrator | Wednesday 03 September 2025 01:06:13 +0000 (0:00:04.844) 0:01:49.530 *** 2025-09-03 01:08:58.649881 | orchestrator | changed: [testbed-node-1] 2025-09-03 01:08:58.649892 | orchestrator | changed: [testbed-node-2] 2025-09-03 01:08:58.649903 | orchestrator | changed: [testbed-node-0] 2025-09-03 01:08:58.649932 | orchestrator | 2025-09-03 01:08:58.649947 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2025-09-03 01:08:58.649960 | orchestrator | Wednesday 03 September 2025 01:06:17 +0000 (0:00:04.333) 0:01:53.863 *** 2025-09-03 
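The steps above build the amphora management plane: two security groups (lb-mgmt-sec-grp opened for ICMP, SSH on 22 and the amphora API on 9443; lb-health-mgr-sec-grp opened for the health-manager heartbeat on 5555/udp) plus the load-balancer management network and subnet. A hedged sketch with the openstack.cloud collection; the protocols and ports mirror the loop items above, while the network name and CIDR are assumptions:

```yaml
# Sketch only: network/subnet names and CIDR are assumed, not read from this log.
- name: Create security groups for octavia
  openstack.cloud.security_group:
    name: "{{ item }}"
  loop:
    - lb-mgmt-sec-grp
    - lb-health-mgr-sec-grp

- name: Add rules for security groups
  openstack.cloud.security_group_rule:
    security_group: "{{ item.group }}"
    protocol: "{{ item.protocol }}"
    port_range_min: "{{ item.port | default(omit) }}"
    port_range_max: "{{ item.port | default(omit) }}"
  loop:
    - { group: lb-mgmt-sec-grp, protocol: icmp }
    - { group: lb-mgmt-sec-grp, protocol: tcp, port: 22 }
    - { group: lb-mgmt-sec-grp, protocol: tcp, port: 9443 }
    - { group: lb-health-mgr-sec-grp, protocol: udp, port: 5555 }

- name: Create loadbalancer management network
  openstack.cloud.network:
    name: lb-mgmt-net

- name: Create loadbalancer management subnet
  openstack.cloud.subnet:
    name: lb-mgmt-subnet
    network: lb-mgmt-net       # parameter may be "network_name" in older collection releases
    cidr: 10.1.0.0/24
```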
2025-09-03 01:08:58.649947 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************
2025-09-03 01:08:58.649960 | orchestrator | Wednesday 03 September 2025 01:06:17 +0000 (0:00:04.333) 0:01:53.863 ***
2025-09-03 01:08:58.649972 | orchestrator | changed: [testbed-node-0]
2025-09-03 01:08:58.649985 | orchestrator | changed: [testbed-node-1]
2025-09-03 01:08:58.649998 | orchestrator | changed: [testbed-node-2]
2025-09-03 01:08:58.650011 | orchestrator |
2025-09-03 01:08:58.650075 | orchestrator | TASK [octavia : Install isc-dhcp-client package] *******************************
2025-09-03 01:08:58.650090 | orchestrator | Wednesday 03 September 2025 01:06:18 +0000 (0:00:00.809) 0:01:54.673 ***
2025-09-03 01:08:58.650103 | orchestrator | ok: [testbed-node-1]
2025-09-03 01:08:58.650116 | orchestrator | ok: [testbed-node-0]
2025-09-03 01:08:58.650130 | orchestrator | ok: [testbed-node-2]
2025-09-03 01:08:58.650143 | orchestrator |
2025-09-03 01:08:58.650155 | orchestrator | TASK [octavia : Create octavia dhclient conf] **********************************
2025-09-03 01:08:58.650168 | orchestrator | Wednesday 03 September 2025 01:06:20 +0000 (0:00:01.981) 0:01:56.654 ***
2025-09-03 01:08:58.650180 | orchestrator | changed: [testbed-node-2]
2025-09-03 01:08:58.650193 | orchestrator | changed: [testbed-node-0]
2025-09-03 01:08:58.650207 | orchestrator | changed: [testbed-node-1]
2025-09-03 01:08:58.650219 | orchestrator |
2025-09-03 01:08:58.650232 | orchestrator | TASK [octavia : Create octavia-interface service] ******************************
2025-09-03 01:08:58.650245 | orchestrator | Wednesday 03 September 2025 01:06:22 +0000 (0:00:01.317) 0:01:57.971 ***
2025-09-03 01:08:58.650259 | orchestrator | changed: [testbed-node-0]
2025-09-03 01:08:58.650273 | orchestrator | changed: [testbed-node-1]
2025-09-03 01:08:58.650284 | orchestrator | changed: [testbed-node-2]
2025-09-03 01:08:58.650296 | orchestrator |
2025-09-03 01:08:58.650327 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] *****************
2025-09-03 01:08:58.650347 | orchestrator | Wednesday 03 September 2025 01:06:23 +0000 (0:00:01.181) 0:01:59.153 ***
2025-09-03 01:08:58.650366 | orchestrator | changed: [testbed-node-0]
2025-09-03 01:08:58.650383 | orchestrator | changed: [testbed-node-1]
2025-09-03 01:08:58.650410 | orchestrator | changed: [testbed-node-2]
2025-09-03 01:08:58.650445 | orchestrator |
2025-09-03 01:08:58.650580 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ********************
2025-09-03 01:08:58.650617 | orchestrator | Wednesday 03 September 2025 01:06:25 +0000 (0:00:01.954) 0:02:01.108 ***
2025-09-03 01:08:58.650650 | orchestrator | changed: [testbed-node-0]
2025-09-03 01:08:58.650683 | orchestrator | changed: [testbed-node-1]
2025-09-03 01:08:58.650720 | orchestrator | changed: [testbed-node-2]
2025-09-03 01:08:58.650750 | orchestrator |
2025-09-03 01:08:58.650777 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] *****************************
2025-09-03 01:08:58.650802 | orchestrator | Wednesday 03 September 2025 01:06:26 +0000 (0:00:01.450) 0:02:02.558 ***
2025-09-03 01:08:58.650827 | orchestrator | ok: [testbed-node-0]
2025-09-03 01:08:58.650866 | orchestrator | ok: [testbed-node-1]
2025-09-03 01:08:58.650892 | orchestrator | ok: [testbed-node-2]
2025-09-03 01:08:58.650960 | orchestrator |
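
What the tasks above assemble on each controller is an OVS internal port named ohm0 on br-int that carries the health-manager Neutron port created earlier (the port ID and MAC go into the interface's external-ids), plus a restricted dhclient configuration and a small octavia-interface systemd unit that brings the interface up and acquires the address the wait task then polls for. A hand-rolled sketch of the same plumbing follows; the host group, file paths, unit content and the two placeholder variables are assumptions, and the real templates ship with kolla-ansible.

---
# Sketch only: manual equivalent of the ohm0 interface setup logged above.
- hosts: controllers                     # assumed inventory group
  become: true
  vars:
    hm_port_id: REPLACE_WITH_NEUTRON_PORT_ID     # placeholder, not from the log
    hm_port_mac: REPLACE_WITH_NEUTRON_PORT_MAC   # placeholder, not from the log
  tasks:
    - name: Plug ohm0 into br-int and bind it to the Neutron port
      ansible.builtin.command: >
        ovs-vsctl --may-exist add-port br-int ohm0
        -- set Interface ohm0 type=internal
        -- set Interface ohm0 external-ids:iface-id={{ hm_port_id }}
        -- set Interface ohm0 external-ids:iface-status=active
        -- set Interface ohm0 external-ids:attached-mac={{ hm_port_mac }}

    - name: Restrict dhclient on ohm0 to the options Octavia needs
      ansible.builtin.copy:
        dest: /etc/dhcp/octavia/dhclient.conf              # assumed path
        content: |
          request subnet-mask,broadcast-address,interface-mtu;
          do-forward-updates false;

    - name: Install an octavia-interface unit that sets the MAC and runs dhclient
      ansible.builtin.copy:
        dest: /etc/systemd/system/octavia-interface.service   # assumed path
        content: |
          [Unit]
          Description=Octavia health-manager interface ohm0
          After=openvswitch-switch.service

          [Service]
          Type=oneshot
          RemainAfterExit=true
          ExecStart=/usr/sbin/ip link set dev ohm0 address {{ hm_port_mac }}
          ExecStart=/usr/sbin/dhclient -v ohm0 -cf /etc/dhcp/octavia/dhclient.conf
          ExecStop=/usr/sbin/dhclient -r ohm0

          [Install]
          WantedBy=multi-user.target

    - name: Enable and start octavia-interface.service
      ansible.builtin.systemd:
        name: octavia-interface.service
        enabled: true
        state: started
        daemon_reload: true
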
2025-09-03 01:08:58.650988 | orchestrator | TASK [octavia : Gather facts] **************************************************
2025-09-03 01:08:58.651013 | orchestrator | Wednesday 03 September 2025 01:06:27 +0000 (0:00:00.866) 0:02:03.424 ***
2025-09-03 01:08:58.651037 | orchestrator | ok: [testbed-node-0]
2025-09-03 01:08:58.651064 | orchestrator | ok: [testbed-node-1]
2025-09-03 01:08:58.651097 | orchestrator | ok: [testbed-node-2]
2025-09-03 01:08:58.651130 | orchestrator |
2025-09-03 01:08:58.651163 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-09-03 01:08:58.651199 | orchestrator | Wednesday 03 September 2025 01:06:30 +0000 (0:00:02.646) 0:02:06.070 ***
2025-09-03 01:08:58.651232 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-03 01:08:58.651267 | orchestrator |
2025-09-03 01:08:58.651289 | orchestrator | TASK [octavia : Get amphora flavor info] ***************************************
2025-09-03 01:08:58.651307 | orchestrator | Wednesday 03 September 2025 01:06:30 +0000 (0:00:00.543) 0:02:06.614 ***
2025-09-03 01:08:58.651324 | orchestrator | ok: [testbed-node-0]
2025-09-03 01:08:58.651342 | orchestrator |
2025-09-03 01:08:58.651360 | orchestrator | TASK [octavia : Get service project id] ****************************************
2025-09-03 01:08:58.651378 | orchestrator | Wednesday 03 September 2025 01:06:34 +0000 (0:00:04.254) 0:02:10.868 ***
2025-09-03 01:08:58.651397 | orchestrator | ok: [testbed-node-0]
2025-09-03 01:08:58.651417 | orchestrator |
2025-09-03 01:08:58.651436 | orchestrator | TASK [octavia : Get security groups for octavia] *******************************
2025-09-03 01:08:58.651455 | orchestrator | Wednesday 03 September 2025 01:06:37 +0000 (0:00:02.975) 0:02:13.843 ***
2025-09-03 01:08:58.651471 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2025-09-03 01:08:58.651483 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2025-09-03 01:08:58.651494 | orchestrator |
2025-09-03 01:08:58.651505 | orchestrator | TASK [octavia : Get loadbalancer management network] ***************************
2025-09-03 01:08:58.651516 | orchestrator | Wednesday 03 September 2025 01:06:44 +0000 (0:00:06.382) 0:02:20.226 ***
2025-09-03 01:08:58.651526 | orchestrator | ok: [testbed-node-0]
2025-09-03 01:08:58.651537 | orchestrator |
2025-09-03 01:08:58.651548 | orchestrator | TASK [octavia : Set octavia resources facts] ***********************************
2025-09-03 01:08:58.651558 | orchestrator | Wednesday 03 September 2025 01:06:47 +0000 (0:00:03.295) 0:02:23.521 ***
2025-09-03 01:08:58.651569 | orchestrator | ok: [testbed-node-0]
2025-09-03 01:08:58.651580 | orchestrator | ok: [testbed-node-1]
2025-09-03 01:08:58.651591 | orchestrator | ok: [testbed-node-2]
2025-09-03 01:08:58.651601 | orchestrator |
2025-09-03 01:08:58.651613 | orchestrator | TASK [octavia : Ensuring config directories exist] *****************************
2025-09-03 01:08:58.651624 | orchestrator | Wednesday 03 September 2025 01:06:47 +0000 (0:00:00.307) 0:02:23.828 ***
2025-09-03 01:08:58.651663 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port':
'9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-03 01:08:58.651744 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-03 01:08:58.651767 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-03 01:08:58.651781 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-03 01:08:58.651793 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-03 01:08:58.651805 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-03 01:08:58.651825 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-03 01:08:58.651838 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-03 01:08:58.651881 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-03 01:08:58.651901 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-03 01:08:58.651943 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-03 01:08:58.651956 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-03 01:08:58.651976 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-03 01:08:58.651988 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-03 01:08:58.652000 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-03 01:08:58.652011 | orchestrator | 2025-09-03 01:08:58.652022 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2025-09-03 01:08:58.652034 | orchestrator | Wednesday 03 September 2025 01:06:50 +0000 (0:00:02.438) 0:02:26.267 *** 2025-09-03 01:08:58.652045 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:08:58.652057 | orchestrator | 2025-09-03 01:08:58.652098 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2025-09-03 01:08:58.652111 | orchestrator | Wednesday 03 September 2025 01:06:50 +0000 (0:00:00.127) 0:02:26.394 *** 2025-09-03 01:08:58.652122 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:08:58.652134 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:08:58.652145 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:08:58.652156 | orchestrator | 2025-09-03 01:08:58.652167 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2025-09-03 01:08:58.652178 | orchestrator | Wednesday 03 September 2025 01:06:50 +0000 
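
Each item in the "Ensuring config directories exist" loop above is a kolla-ansible container definition for one Octavia service: the volumes list maps the per-service config directory under /etc/kolla/ into the container (the empty strings appear to be optional mounts that render empty when disabled), the healthcheck block becomes the Docker health check and relies on the healthcheck_curl/healthcheck_port helpers shipped in the kolla images, and the haproxy block is what later renders the internal frontend on the VIP and the external frontend behind api.testbed.osism.xyz, both on port 9876. Trimmed to the fields visible in this log, a single entry looks roughly like this (a sketch, not the literal group_vars):

# Sketch of one service entry as consumed by kolla-ansible (values copied
# from the testbed-node-0 item above, empty optional volumes dropped).
octavia-api:
  container_name: octavia_api
  group: octavia-api
  enabled: true
  image: registry.osism.tech/kolla/octavia-api:2024.2
  volumes:
    - /etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro
    - /etc/localtime:/etc/localtime:ro
    - /etc/timezone:/etc/timezone:ro
    - kolla_logs:/var/log/kolla/
    - octavia_driver_agent:/var/run/octavia/
  dimensions: {}
  healthcheck:
    interval: "30"
    retries: "3"
    start_period: "5"
    timeout: "30"
    test: ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9876"]
  haproxy:
    octavia_api:                       # internal VIP frontend
      enabled: "yes"
      mode: http
      external: false
      port: "9876"
      listen_port: "9876"
      tls_backend: "no"
    octavia_api_external:              # public frontend, api.testbed.osism.xyz
      enabled: "yes"
      mode: http
      external: true
      external_fqdn: api.testbed.osism.xyz
      port: "9876"
      listen_port: "9876"
      tls_backend: "no"
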
(0:00:00.502) 0:02:26.897 *** 2025-09-03 01:08:58.652195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-03 01:08:58.652207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-03 01:08:58.652226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-03 01:08:58.652238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-03 01:08:58.652249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-03 01:08:58.652261 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:08:58.652309 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-03 01:08:58.652323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-03 01:08:58.652334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-03 01:08:58.652354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-03 01:08:58.652366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-03 01:08:58.652377 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:08:58.652389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-03 01:08:58.652432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-03 01:08:58.652446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-03 01:08:58.652458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-03 01:08:58.652482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-03 01:08:58.652494 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:08:58.652505 | orchestrator | 2025-09-03 01:08:58.652516 | orchestrator | TASK [octavia : include_tasks] 
************************************************* 2025-09-03 01:08:58.652527 | orchestrator | Wednesday 03 September 2025 01:06:51 +0000 (0:00:00.639) 0:02:27.536 *** 2025-09-03 01:08:58.652538 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-03 01:08:58.652554 | orchestrator | 2025-09-03 01:08:58.652566 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2025-09-03 01:08:58.652616 | orchestrator | Wednesday 03 September 2025 01:06:52 +0000 (0:00:00.536) 0:02:28.072 *** 2025-09-03 01:08:58.652629 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-03 01:08:58.652676 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-03 01:08:58.652695 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-03 01:08:58.652715 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-03 01:08:58.652727 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-03 01:08:58.652738 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-03 01:08:58.652750 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-03 01:08:58.652761 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-03 01:08:58.652785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-03 01:08:58.652805 | orchestrator | 
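
The service-cert-copy step above places the deployment's extra CA bundle into each service's host-side config directory under /etc/kolla/; those are the directories bind-mounted read-only to /var/lib/kolla/config_files/ in the volumes shown earlier, and the container's startup logic copies the files from there into place when the service starts. A rough stand-alone equivalent of the copy is sketched below; the source bundle path, destination file name and host group are assumptions, only the five service names come from the log.

---
# Sketch only: approximates "octavia | Copying over extra CA certificates".
- hosts: controllers                                  # assumed inventory group
  become: true
  vars:
    octavia_services:
      - octavia-api
      - octavia-driver-agent
      - octavia-health-manager
      - octavia-housekeeping
      - octavia-worker
  tasks:
    - name: Drop the CA bundle next to each service's kolla config files
      ansible.builtin.copy:
        src: /etc/ssl/certs/ca-certificates.crt       # assumed source bundle
        dest: "/etc/kolla/{{ item }}/ca-certificates.crt"   # layout is an assumption
        remote_src: true
        mode: "0644"
      loop: "{{ octavia_services }}"
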
changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-03 01:08:58.652817 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-03 01:08:58.652828 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-03 01:08:58.652840 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-03 01:08:58.652852 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-03 01:08:58.652871 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-03 01:08:58.652883 | orchestrator | 2025-09-03 01:08:58.652894 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2025-09-03 01:08:58.652905 | orchestrator | Wednesday 03 September 2025 01:06:57 +0000 (0:00:05.143) 0:02:33.216 *** 2025-09-03 01:08:58.652946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-03 01:08:58.652966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-03 01:08:58.652978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-03 01:08:58.652990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-03 01:08:58.653001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-03 01:08:58.653012 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:08:58.653030 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-03 01:08:58.653054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-03 01:08:58.653066 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-03 01:08:58.653078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-03 01:08:58.653089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-03 01:08:58.653100 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:08:58.653112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-03 01:08:58.653130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-03 01:08:58.653153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-03 01:08:58.653165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-03 01:08:58.653176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-03 01:08:58.653188 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:08:58.653199 | orchestrator | 2025-09-03 01:08:58.653210 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2025-09-03 01:08:58.653221 | orchestrator | Wednesday 03 September 2025 01:06:58 +0000 (0:00:00.907) 0:02:34.124 *** 2025-09-03 01:08:58.653233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-03 01:08:58.653245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-03 01:08:58.653257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-03 01:08:58.653286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-03 01:08:58.653298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 
'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-03 01:08:58.653310 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:08:58.653321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-03 01:08:58.653333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-03 01:08:58.653345 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-03 01:08:58.653356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-03 01:08:58.653382 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-03 01:08:58.653394 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:08:58.653411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-03 01:08:58.653423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-03 01:08:58.653434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-03 01:08:58.653446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-03 01:08:58.653457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 
'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-03 01:08:58.653476 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:08:58.653487 | orchestrator | 2025-09-03 01:08:58.653498 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2025-09-03 01:08:58.653509 | orchestrator | Wednesday 03 September 2025 01:06:59 +0000 (0:00:00.854) 0:02:34.979 *** 2025-09-03 01:08:58.653535 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-03 01:08:58.653548 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-03 01:08:58.653560 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-03 01:08:58.653571 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-03 01:08:58.653590 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-03 01:08:58.653602 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-03 01:08:58.653624 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-03 01:08:58.653637 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-03 01:08:58.653648 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-03 01:08:58.653660 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-03 01:08:58.653672 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-03 01:08:58.653690 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-03 01:08:58.653709 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-03 01:08:58.653730 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-03 01:08:58.653742 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-03 01:08:58.653753 | orchestrator | 2025-09-03 01:08:58.653764 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2025-09-03 01:08:58.653776 | orchestrator | Wednesday 03 September 2025 01:07:04 +0000 (0:00:05.127) 0:02:40.106 *** 2025-09-03 01:08:58.653787 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-09-03 01:08:58.653798 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-09-03 01:08:58.653809 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-09-03 01:08:58.653820 | orchestrator | 2025-09-03 01:08:58.653831 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2025-09-03 01:08:58.653843 | orchestrator | Wednesday 03 September 2025 01:07:06 +0000 (0:00:02.064) 0:02:42.170 *** 2025-09-03 01:08:58.653854 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-03 01:08:58.653873 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-03 01:08:58.653898 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-03 01:08:58.653925 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-03 01:08:58.653938 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-03 01:08:58.653949 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-03 01:08:58.653961 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-03 01:08:58.653980 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-03 01:08:58.653991 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-03 01:08:58.654064 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-03 01:08:58.654080 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-03 01:08:58.654092 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-03 01:08:58.654103 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-03 01:08:58.654122 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-03 01:08:58.654134 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-03 01:08:58.654145 | orchestrator | 2025-09-03 01:08:58.654156 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2025-09-03 01:08:58.654167 | orchestrator | Wednesday 03 September 2025 01:07:22 +0000 (0:00:15.884) 0:02:58.055 *** 2025-09-03 01:08:58.654179 | orchestrator | changed: [testbed-node-0] 2025-09-03 01:08:58.654190 | orchestrator | changed: [testbed-node-1] 2025-09-03 01:08:58.654201 | orchestrator | changed: [testbed-node-2] 2025-09-03 01:08:58.654212 | orchestrator | 2025-09-03 01:08:58.654223 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2025-09-03 01:08:58.654234 | orchestrator | Wednesday 03 September 2025 01:07:23 +0000 (0:00:01.505) 0:02:59.560 *** 2025-09-03 01:08:58.654245 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-09-03 01:08:58.654256 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-09-03 01:08:58.654272 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-09-03 01:08:58.654284 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-09-03 01:08:58.654295 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-09-03 01:08:58.654306 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-09-03 01:08:58.654317 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-09-03 01:08:58.654328 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-09-03 01:08:58.654339 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-09-03 01:08:58.654356 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-09-03 01:08:58.654367 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-09-03 01:08:58.654378 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-09-03 01:08:58.654389 | orchestrator | 2025-09-03 01:08:58.654400 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2025-09-03 01:08:58.654411 | orchestrator | Wednesday 03 September 2025 01:07:28 +0000 (0:00:05.232) 0:03:04.793 *** 2025-09-03 01:08:58.654422 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-09-03 01:08:58.654433 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-09-03 01:08:58.654444 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-09-03 
01:08:58.654455 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-09-03 01:08:58.654466 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-09-03 01:08:58.654484 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-09-03 01:08:58.654495 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-09-03 01:08:58.654505 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-09-03 01:08:58.654516 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-09-03 01:08:58.654527 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-09-03 01:08:58.654538 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-09-03 01:08:58.654549 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-09-03 01:08:58.654560 | orchestrator | 2025-09-03 01:08:58.654571 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2025-09-03 01:08:58.654582 | orchestrator | Wednesday 03 September 2025 01:07:34 +0000 (0:00:05.296) 0:03:10.090 *** 2025-09-03 01:08:58.654593 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-09-03 01:08:58.654604 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-09-03 01:08:58.654615 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-09-03 01:08:58.654626 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-09-03 01:08:58.654637 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-09-03 01:08:58.654647 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-09-03 01:08:58.654658 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-09-03 01:08:58.654669 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-09-03 01:08:58.654680 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-09-03 01:08:58.654691 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-09-03 01:08:58.654702 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-09-03 01:08:58.654713 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-09-03 01:08:58.654724 | orchestrator | 2025-09-03 01:08:58.654734 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2025-09-03 01:08:58.654745 | orchestrator | Wednesday 03 September 2025 01:07:39 +0000 (0:00:05.196) 0:03:15.286 *** 2025-09-03 01:08:58.654757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-03 01:08:58.654782 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-03 01:08:58.654803 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-03 01:08:58.654815 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-03 01:08:58.654826 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-03 01:08:58.654838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-03 01:08:58.654849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-03 01:08:58.654866 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-03 01:08:58.654883 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-03 01:08:58.654902 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-03 01:08:58.654967 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-03 01:08:58.654981 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-03 01:08:58.654993 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-03 01:08:58.655004 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-03 01:08:58.655023 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-03 01:08:58.655041 | orchestrator | 2025-09-03 01:08:58.655051 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-03 01:08:58.655062 | orchestrator | Wednesday 03 September 2025 01:07:42 +0000 (0:00:03.527) 0:03:18.814 *** 2025-09-03 01:08:58.655071 | orchestrator | skipping: [testbed-node-0] 2025-09-03 01:08:58.655081 | orchestrator | skipping: [testbed-node-1] 2025-09-03 01:08:58.655091 | orchestrator | skipping: [testbed-node-2] 2025-09-03 01:08:58.655101 | orchestrator | 2025-09-03 01:08:58.655115 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2025-09-03 01:08:58.655125 | orchestrator | Wednesday 03 September 2025 01:07:43 +0000 (0:00:00.309) 0:03:19.124 *** 2025-09-03 01:08:58.655135 | orchestrator | changed: [testbed-node-0] 2025-09-03 01:08:58.655145 | orchestrator | 2025-09-03 01:08:58.655154 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2025-09-03 01:08:58.655164 | orchestrator | Wednesday 03 September 2025 01:07:45 +0000 (0:00:02.041) 0:03:21.165 *** 2025-09-03 01:08:58.655174 | orchestrator | changed: [testbed-node-0] 2025-09-03 01:08:58.655184 | orchestrator | 2025-09-03 01:08:58.655193 | orchestrator | 
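Note on the healthcheck blocks printed in the items above: each octavia service entry carries a kolla-style healthcheck (interval/retries/start_period/timeout given in seconds as strings, plus a CMD-SHELL test such as 'healthcheck_port octavia-worker 5672' or 'healthcheck_curl http://192.168.16.10:9876'). A minimal Python sketch, purely illustrative and not kolla-ansible's actual implementation, of how such a block could be mapped onto a Docker-Engine-style HealthConfig dict (the nanosecond conversion and field names are assumptions for illustration):

    # Illustrative sketch only: convert one of the healthcheck blocks shown in
    # the log above into a Docker-Engine-style HealthConfig dict.
    NANOSECONDS_PER_SECOND = 1_000_000_000

    def to_healthconfig(healthcheck: dict) -> dict:
        """Map a kolla-style healthcheck block (seconds as strings) to a HealthConfig-like dict."""
        return {
            "Test": healthcheck["test"],  # e.g. ['CMD-SHELL', 'healthcheck_port octavia-worker 5672']
            "Interval": int(healthcheck["interval"]) * NANOSECONDS_PER_SECOND,
            "Timeout": int(healthcheck["timeout"]) * NANOSECONDS_PER_SECOND,
            "StartPeriod": int(healthcheck["start_period"]) * NANOSECONDS_PER_SECOND,
            "Retries": int(healthcheck["retries"]),
        }

    # Example with the octavia-worker block from the log above:
    print(to_healthconfig({
        "interval": "30", "retries": "3", "start_period": "5",
        "test": ["CMD-SHELL", "healthcheck_port octavia-worker 5672"],
        "timeout": "30",
    }))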
TASK [octavia : Creating Octavia database user and setting permissions] ******** 2025-09-03 01:08:58.655203 | orchestrator | Wednesday 03 September 2025 01:07:47 +0000 (0:00:01.988) 0:03:23.153 *** 2025-09-03 01:08:58.655213 | orchestrator | changed: [testbed-node-0] 2025-09-03 01:08:58.655222 | orchestrator | 2025-09-03 01:08:58.655238 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2025-09-03 01:08:58.655255 | orchestrator | Wednesday 03 September 2025 01:07:49 +0000 (0:00:02.106) 0:03:25.260 *** 2025-09-03 01:08:58.655272 | orchestrator | changed: [testbed-node-0] 2025-09-03 01:08:58.655285 | orchestrator | 2025-09-03 01:08:58.655302 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2025-09-03 01:08:58.655320 | orchestrator | Wednesday 03 September 2025 01:07:51 +0000 (0:00:02.114) 0:03:27.374 *** 2025-09-03 01:08:58.655336 | orchestrator | changed: [testbed-node-0] 2025-09-03 01:08:58.655351 | orchestrator | 2025-09-03 01:08:58.655361 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-09-03 01:08:58.655371 | orchestrator | Wednesday 03 September 2025 01:08:12 +0000 (0:00:21.076) 0:03:48.451 *** 2025-09-03 01:08:58.655381 | orchestrator | 2025-09-03 01:08:58.655391 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-09-03 01:08:58.655400 | orchestrator | Wednesday 03 September 2025 01:08:12 +0000 (0:00:00.064) 0:03:48.516 *** 2025-09-03 01:08:58.655410 | orchestrator | 2025-09-03 01:08:58.655419 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-09-03 01:08:58.655429 | orchestrator | Wednesday 03 September 2025 01:08:12 +0000 (0:00:00.063) 0:03:48.579 *** 2025-09-03 01:08:58.655439 | orchestrator | 2025-09-03 01:08:58.655448 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2025-09-03 01:08:58.655458 | orchestrator | Wednesday 03 September 2025 01:08:12 +0000 (0:00:00.060) 0:03:48.640 *** 2025-09-03 01:08:58.655467 | orchestrator | changed: [testbed-node-0] 2025-09-03 01:08:58.655477 | orchestrator | changed: [testbed-node-1] 2025-09-03 01:08:58.655487 | orchestrator | changed: [testbed-node-2] 2025-09-03 01:08:58.655496 | orchestrator | 2025-09-03 01:08:58.655506 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2025-09-03 01:08:58.655516 | orchestrator | Wednesday 03 September 2025 01:08:24 +0000 (0:00:11.608) 0:04:00.249 *** 2025-09-03 01:08:58.655525 | orchestrator | changed: [testbed-node-1] 2025-09-03 01:08:58.655535 | orchestrator | changed: [testbed-node-0] 2025-09-03 01:08:58.655545 | orchestrator | changed: [testbed-node-2] 2025-09-03 01:08:58.655554 | orchestrator | 2025-09-03 01:08:58.655564 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2025-09-03 01:08:58.655574 | orchestrator | Wednesday 03 September 2025 01:08:35 +0000 (0:00:11.626) 0:04:11.876 *** 2025-09-03 01:08:58.655591 | orchestrator | changed: [testbed-node-0] 2025-09-03 01:08:58.655600 | orchestrator | changed: [testbed-node-1] 2025-09-03 01:08:58.655610 | orchestrator | changed: [testbed-node-2] 2025-09-03 01:08:58.655620 | orchestrator | 2025-09-03 01:08:58.655629 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2025-09-03 01:08:58.655639 | orchestrator | 
Wednesday 03 September 2025 01:08:41 +0000 (0:00:05.057) 0:04:16.933 *** 2025-09-03 01:08:58.655649 | orchestrator | changed: [testbed-node-1] 2025-09-03 01:08:58.655658 | orchestrator | changed: [testbed-node-2] 2025-09-03 01:08:58.655668 | orchestrator | changed: [testbed-node-0] 2025-09-03 01:08:58.655678 | orchestrator | 2025-09-03 01:08:58.655688 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2025-09-03 01:08:58.655697 | orchestrator | Wednesday 03 September 2025 01:08:49 +0000 (0:00:08.297) 0:04:25.230 *** 2025-09-03 01:08:58.655707 | orchestrator | changed: [testbed-node-1] 2025-09-03 01:08:58.655716 | orchestrator | changed: [testbed-node-2] 2025-09-03 01:08:58.655726 | orchestrator | changed: [testbed-node-0] 2025-09-03 01:08:58.655736 | orchestrator | 2025-09-03 01:08:58.655745 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-03 01:08:58.655756 | orchestrator | testbed-node-0 : ok=57  changed=39  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-03 01:08:58.655765 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-03 01:08:58.655775 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-03 01:08:58.655785 | orchestrator | 2025-09-03 01:08:58.655794 | orchestrator | 2025-09-03 01:08:58.655804 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-03 01:08:58.655814 | orchestrator | Wednesday 03 September 2025 01:08:57 +0000 (0:00:08.489) 0:04:33.720 *** 2025-09-03 01:08:58.655829 | orchestrator | =============================================================================== 2025-09-03 01:08:58.655839 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 21.08s 2025-09-03 01:08:58.655849 | orchestrator | octavia : Add rules for security groups -------------------------------- 16.20s 2025-09-03 01:08:58.655859 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 15.88s 2025-09-03 01:08:58.655869 | orchestrator | octavia : Adding octavia related roles --------------------------------- 15.24s 2025-09-03 01:08:58.655879 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 11.63s 2025-09-03 01:08:58.655897 | orchestrator | octavia : Restart octavia-api container -------------------------------- 11.61s 2025-09-03 01:08:58.655907 | orchestrator | octavia : Create security groups for octavia ---------------------------- 9.98s 2025-09-03 01:08:58.655935 | orchestrator | octavia : Restart octavia-worker container ------------------------------ 8.49s 2025-09-03 01:08:58.655945 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.32s 2025-09-03 01:08:58.655955 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 8.30s 2025-09-03 01:08:58.655965 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.46s 2025-09-03 01:08:58.655975 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.79s 2025-09-03 01:08:58.655984 | orchestrator | octavia : Get security groups for octavia ------------------------------- 6.38s 2025-09-03 01:08:58.655994 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 6.33s 2025-09-03 
01:08:58.656004 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.30s 2025-09-03 01:08:58.656014 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.23s 2025-09-03 01:08:58.656023 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.20s 2025-09-03 01:08:58.656041 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 5.14s 2025-09-03 01:08:58.656050 | orchestrator | octavia : Copying over config.json files for services ------------------- 5.13s 2025-09-03 01:08:58.656060 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 5.06s 2025-09-03 01:08:58.656070 | orchestrator | 2025-09-03 01:08:58 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-03 01:09:01.690408 | orchestrator | 2025-09-03 01:09:01 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-03 01:09:04.732846 | orchestrator | 2025-09-03 01:09:04 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-03 01:09:07.777789 | orchestrator | 2025-09-03 01:09:07 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-03 01:09:10.826085 | orchestrator | 2025-09-03 01:09:10 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-03 01:09:13.870318 | orchestrator | 2025-09-03 01:09:13 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-03 01:09:16.914724 | orchestrator | 2025-09-03 01:09:16 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-03 01:09:19.962673 | orchestrator | 2025-09-03 01:09:19 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-03 01:09:23.006692 | orchestrator | 2025-09-03 01:09:23 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-03 01:09:26.047268 | orchestrator | 2025-09-03 01:09:26 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-03 01:09:29.088505 | orchestrator | 2025-09-03 01:09:29 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-03 01:09:32.126708 | orchestrator | 2025-09-03 01:09:32 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-03 01:09:35.162417 | orchestrator | 2025-09-03 01:09:35 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-03 01:09:38.208955 | orchestrator | 2025-09-03 01:09:38 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-03 01:09:41.249231 | orchestrator | 2025-09-03 01:09:41 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-03 01:09:44.294891 | orchestrator | 2025-09-03 01:09:44 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-03 01:09:47.336592 | orchestrator | 2025-09-03 01:09:47 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-03 01:09:50.379354 | orchestrator | 2025-09-03 01:09:50 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-03 01:09:53.418529 | orchestrator | 2025-09-03 01:09:53 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-03 01:09:56.455767 | orchestrator | 2025-09-03 01:09:56 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-03 01:09:59.494569 | orchestrator | 2025-09-03 01:09:59.706586 | orchestrator | 2025-09-03 01:09:59.709470 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Wed Sep 3 01:09:59 UTC 2025 2025-09-03 01:09:59.709503 | orchestrator | 2025-09-03 01:10:00.208648 | orchestrator | ok: Runtime: 0:32:44.295992 2025-09-03 01:10:00.485333 | 2025-09-03 01:10:00.485533 | TASK 
[Bootstrap services] 2025-09-03 01:10:01.255042 | orchestrator | 2025-09-03 01:10:01.255227 | orchestrator | # BOOTSTRAP 2025-09-03 01:10:01.255248 | orchestrator | 2025-09-03 01:10:01.255262 | orchestrator | + set -e 2025-09-03 01:10:01.255275 | orchestrator | + echo 2025-09-03 01:10:01.255290 | orchestrator | + echo '# BOOTSTRAP' 2025-09-03 01:10:01.255309 | orchestrator | + echo 2025-09-03 01:10:01.255354 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2025-09-03 01:10:01.264676 | orchestrator | + set -e 2025-09-03 01:10:01.264700 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2025-09-03 01:10:05.108968 | orchestrator | 2025-09-03 01:10:05 | INFO  | It takes a moment until task f4e5e1cb-3cdf-4caa-b8de-3bb5a44b8d14 (flavor-manager) has been started and output is visible here. 2025-09-03 01:10:08.576133 | orchestrator | ╭───────────────────── Traceback (most recent call last) ──────────────────────╮ 2025-09-03 01:10:08.576230 | orchestrator | │ /usr/local/lib/python3.13/site-packages/openstack_flavor_manager/main.py:179 │ 2025-09-03 01:10:08.576255 | orchestrator | │ in run │ 2025-09-03 01:10:08.576269 | orchestrator | │ │ 2025-09-03 01:10:08.576280 | orchestrator | │ 176 │ logger.add(sys.stderr, format=log_fmt, level=level, colorize=True) │ 2025-09-03 01:10:08.576301 | orchestrator | │ 177 │ │ 2025-09-03 01:10:08.576312 | orchestrator | │ 178 │ definitions = get_flavor_definitions(name, url) │ 2025-09-03 01:10:08.576325 | orchestrator | │ ❱ 179 │ manager = FlavorManager( │ 2025-09-03 01:10:08.576337 | orchestrator | │ 180 │ │ cloud=Cloud(cloud), definitions=definitions, recommended=recom │ 2025-09-03 01:10:08.576348 | orchestrator | │ 181 │ ) │ 2025-09-03 01:10:08.576359 | orchestrator | │ 182 │ manager.run() │ 2025-09-03 01:10:08.576370 | orchestrator | │ │ 2025-09-03 01:10:08.576383 | orchestrator | │ ╭───────────────────────────────── locals ─────────────────────────────────╮ │ 2025-09-03 01:10:08.576405 | orchestrator | │ │ cloud = 'admin' │ │ 2025-09-03 01:10:08.576416 | orchestrator | │ │ debug = False │ │ 2025-09-03 01:10:08.576428 | orchestrator | │ │ definitions = { │ │ 2025-09-03 01:10:08.576439 | orchestrator | │ │ │ 'reference': [ │ │ 2025-09-03 01:10:08.576450 | orchestrator | │ │ │ │ {'field': 'name', 'mandatory_prefix': 'SCS-'}, │ │ 2025-09-03 01:10:08.576461 | orchestrator | │ │ │ │ {'field': 'cpus'}, │ │ 2025-09-03 01:10:08.576472 | orchestrator | │ │ │ │ {'field': 'ram'}, │ │ 2025-09-03 01:10:08.576483 | orchestrator | │ │ │ │ {'field': 'disk'}, │ │ 2025-09-03 01:10:08.576494 | orchestrator | │ │ │ │ {'field': 'public', 'default': True}, │ │ 2025-09-03 01:10:08.576506 | orchestrator | │ │ │ │ {'field': 'disabled', 'default': False} │ │ 2025-09-03 01:10:08.576517 | orchestrator | │ │ │ ], │ │ 2025-09-03 01:10:08.576527 | orchestrator | │ │ │ 'mandatory': [ │ │ 2025-09-03 01:10:08.576538 | orchestrator | │ │ │ │ { │ │ 2025-09-03 01:10:08.576550 | orchestrator | │ │ │ │ │ 'name': 'SCS-1L-1', │ │ 2025-09-03 01:10:08.576581 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-03 01:10:08.576593 | orchestrator | │ │ │ │ │ 'ram': 1024, │ │ 2025-09-03 01:10:08.576604 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-09-03 01:10:08.576615 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'crowded-core', │ │ 2025-09-03 01:10:08.576626 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-03 01:10:08.576637 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1L:1', │ │ 2025-09-03 01:10:08.576648 | orchestrator | │ │ │ │ │ 
'scs:name-v2': 'SCS-1L-1', 'hw_rng:allowed': 'true' },
{ 'name': 'SCS-1L-1-5', 'cpus': 1, 'ram': 1024, 'disk': 5, 'scs:cpu-type': 'crowded-core', 'scs:disk0-type': 'network', 'scs:name-v1': 'SCS-1L:5', 'scs:name-v2': 'SCS-1L-5', 'hw_rng:allowed': 'true' },
{ 'name': 'SCS-1V-2', 'cpus': 1, 'ram': 2048, 'disk': 0, 'scs:cpu-type': 'shared-core', 'scs:disk0-type': 'network', 'scs:name-v1': 'SCS-1V:2', 'scs:name-v2': 'SCS-1V-2', 'hw_rng:allowed': 'true' },
{ 'name': 'SCS-1V-2-5', 'cpus': 1, 'ram': 2048, 'disk': 5, 'scs:cpu-type': 'shared-core', 'scs:disk0-type': 'network', 'scs:name-v1': 'SCS-1V:2:5', 'scs:name-v2': 'SCS-1V-2-5', 'hw_rng:allowed': 'true' },
{ 'name': 'SCS-1V-4', 'cpus': 1, 'ram': 4096, 'disk': 0, 'scs:cpu-type': 'shared-core', 'scs:disk0-type': 'network', 'scs:name-v1': 'SCS-1V:4', 'scs:name-v2': 'SCS-1V-4', 'hw_rng:allowed': 'true' },
{ 'name': 'SCS-1V-4-10', 'cpus': 1, 'ram': 4096, 'disk': 10, 'scs:cpu-type': 'shared-core', 'scs:disk0-type': 'network', 'scs:name-v1': 'SCS-1V:4:10', 'scs:name-v2': 'SCS-1V-4-10', 'hw_rng:allowed': 'true' },
{ 'name': 'SCS-1V-8', 'cpus': 1, 'ram': 8192, 'disk': 0, 'scs:cpu-type': 'shared-core', 'scs:disk0-type': 'network', 'scs:name-v1': 'SCS-1V:8', 'scs:name-v2': 'SCS-1V-8', 'hw_rng:allowed': 'true' },
{ 'name': 'SCS-1V-8-20', 'cpus': 1, 'ram': 8192, 'disk': 20, 'scs:cpu-type': 'shared-core', 'scs:disk0-type': 'network', 'scs:name-v1': 'SCS-1V:8:20', 'scs:name-v2': 'SCS-1V-8-20', 'hw_rng:allowed': 'true' },
{ 'name': 'SCS-2V-4', 'cpus': 2, 'ram': 4096, 'disk': 0, 'scs:cpu-type': 'shared-core', 'scs:disk0-type': 'network', 'scs:name-v1': 'SCS-2V:4', 'scs:name-v2': 'SCS-2V-4', 'hw_rng:allowed': 'true' },
{ 'name': 'SCS-2V-4-10', 'cpus': 2, 'ram': 4096, 'disk': 10, 'scs:cpu-type': 'shared-core', 'scs:disk0-type': 'network', 'scs:name-v1': 'SCS-2V:4:10', 'scs:name-v2': 'SCS-2V-4-10', 'hw_rng:allowed': 'true' },
... +19
]
}
level = 'INFO'
log_fmt = '{time:YYYY-MM-DD HH:mm:ss} | {level: <8} | '+17
name = 'local'
recommended = True
url = None

/usr/local/lib/python3.13/site-packages/openstack_flavor_manager/main.py:97 in __init__

    94 │   self.required_flavors = definitions["mandatory"]
    95 │   self.cloud = cloud
    96 │   if recommended:
  ❱ 97 │       self.required_flavors = self.required_flavors + definition
    98 │
    99 │   self.defaults_dict = {}
   100 │   for item in definitions["reference"]:

locals:
    cloud =
    definitions = {
        'reference': [
            {'field': 'name', 'mandatory_prefix': 'SCS-'},
            {'field': 'cpus'},
            {'field': 'ram'},
            {'field': 'disk'},
            {'field': 'public', 'default': True},
            {'field': 'disabled', 'default': False}
        ],
        'mandatory': [ the flavor list shown above: SCS-1L-1, SCS-1L-1-5, SCS-1V-2, SCS-1V-2-5, SCS-1V-4, SCS-1V-4-10, SCS-1V-8, SCS-1V-8-20, SCS-2V-4, SCS-2V-4-10, ... +19 ]
    }
    recommended = True
    self =

2025-09-03 01:10:08.706537 | orchestrator | KeyError: 'recommended'
2025-09-03 01:10:09.130428 | orchestrator | ERROR
2025-09-03 01:10:09.130627 | orchestrator | {
2025-09-03 01:10:09.130665 | orchestrator |     "delta": "0:00:08.171814",
2025-09-03 01:10:09.130689 | orchestrator |     "end": "2025-09-03 01:10:09.025250",
2025-09-03 01:10:09.130710 | orchestrator |     "msg": "non-zero return code",
2025-09-03 01:10:09.130730 | orchestrator |     "rc": 1,
2025-09-03 01:10:09.130748 | orchestrator |     "start": "2025-09-03 01:10:00.853436"
2025-09-03 01:10:09.130767 | orchestrator | } failure
2025-09-03 01:10:09.141340 |
2025-09-03 01:10:09.141436 | PLAY RECAP
2025-09-03 01:10:09.141505 | orchestrator | ok: 22 changed: 9 unreachable: 0 failed: 1 skipped: 3 rescued: 0 ignored: 0
2025-09-03 01:10:09.141649 |
2025-09-03 01:10:09.363002 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
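The KeyError above is what fails the deploy playbook: the flavor definitions served by the 'local' source carry only the keys 'reference' and 'mandatory', while the truncated line 97 appears to index definitions['recommended'] directly once recommended=True is requested. A minimal sketch of that failure mode and of a defensive variant, assuming only the structure visible in the locals dump (illustrative code, not the upstream openstack-flavor-manager implementation):

    # Minimal reproduction, assuming a definitions dict shaped like the locals above.
    definitions = {
        "reference": [
            {"field": "name", "mandatory_prefix": "SCS-"},
            {"field": "cpus"},
        ],
        "mandatory": [
            {"name": "SCS-1L-1", "cpus": 1, "ram": 1024, "disk": 0},
        ],
        # no "recommended" key -- matching the locals shown in the traceback
    }
    recommended = True

    required_flavors = definitions["mandatory"]
    if recommended:
        try:
            # mirrors the direct lookup that the truncated line 97 appears to do
            required_flavors = required_flavors + definitions["recommended"]
        except KeyError as exc:
            print(f"KeyError: {exc}")  # -> KeyError: 'recommended'

    # Defensive variant (hypothetical, not necessarily the upstream fix): treat a
    # missing "recommended" section as an empty list instead of aborting the run.
    required_flavors = definitions["mandatory"] + (
        definitions.get("recommended", []) if recommended else []
    )
    print(len(required_flavors))  # -> 1

Guarding the lookup with dict.get(), or shipping a 'recommended' section with the local definitions, would both let the flavor manager proceed; which of the two is the intended fix cannot be decided from this log alone.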
2025-09-03 01:10:09.364129 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-09-03 01:10:10.112595 |
2025-09-03 01:10:10.112760 | PLAY [Post output play]
2025-09-03 01:10:10.129233 |
2025-09-03 01:10:10.129382 | LOOP [stage-output : Register sources]
2025-09-03 01:10:10.211228 |
2025-09-03 01:10:10.211565 | TASK [stage-output : Check sudo]
2025-09-03 01:10:11.056672 | orchestrator | sudo: a password is required
2025-09-03 01:10:11.255674 | orchestrator | ok: Runtime: 0:00:00.014945
2025-09-03 01:10:11.271630 |
2025-09-03 01:10:11.271785 | LOOP [stage-output : Set source and destination for files and folders]
2025-09-03 01:10:11.303570 |
2025-09-03 01:10:11.303747 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-09-03 01:10:11.395280 | orchestrator | ok
2025-09-03 01:10:11.401104 |
2025-09-03 01:10:11.401207 | LOOP [stage-output : Ensure target folders exist]
2025-09-03 01:10:11.835930 | orchestrator | ok: "docs"
2025-09-03 01:10:11.836382 |
2025-09-03 01:10:12.061783 | orchestrator | ok: "artifacts"
2025-09-03 01:10:12.292647 | orchestrator | ok: "logs"
2025-09-03 01:10:12.309490 |
2025-09-03 01:10:12.309646 | LOOP [stage-output : Copy files and folders to staging folder]
2025-09-03 01:10:12.344903 |
2025-09-03 01:10:12.345159 | TASK [stage-output : Make all log files readable]
2025-09-03 01:10:12.604138 | orchestrator | ok
2025-09-03 01:10:12.612299 |
2025-09-03 01:10:12.612425 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-09-03 01:10:12.646866 | orchestrator | skipping: Conditional result was False
2025-09-03 01:10:12.661421 |
2025-09-03 01:10:12.661575 | TASK [stage-output : Discover log files for compression]
2025-09-03 01:10:12.686247 | orchestrator | skipping: Conditional result was False
2025-09-03 01:10:12.701268 |
2025-09-03 01:10:12.701417 | LOOP [stage-output : Archive everything from logs]
2025-09-03 01:10:12.746761 |
2025-09-03 01:10:12.746958 | PLAY [Post cleanup play]
2025-09-03 01:10:12.755616 |
2025-09-03 01:10:12.755717 | TASK [Set cloud fact (Zuul deployment)]
2025-09-03 01:10:12.820608 | orchestrator | ok
2025-09-03 01:10:12.831951 |
2025-09-03 01:10:12.832084 | TASK [Set cloud fact (local deployment)]
2025-09-03 01:10:12.865706 | orchestrator | skipping: Conditional result was False
2025-09-03 01:10:12.883261 |
2025-09-03 01:10:12.883412 | TASK [Clean the cloud environment]
2025-09-03 01:10:13.389261 | orchestrator | 2025-09-03 01:10:13 - clean up servers
2025-09-03 01:10:14.201723 | orchestrator | 2025-09-03 01:10:14 - testbed-manager
2025-09-03 01:10:14.290295 | orchestrator | 2025-09-03 01:10:14 - testbed-node-1
2025-09-03 01:10:14.374174 | orchestrator | 2025-09-03 01:10:14 - testbed-node-2
2025-09-03 01:10:14.456840 | orchestrator | 2025-09-03 01:10:14 - testbed-node-4
2025-09-03 01:10:14.553329 | orchestrator | 2025-09-03 01:10:14 - testbed-node-5
2025-09-03 01:10:14.641565 | orchestrator | 2025-09-03 01:10:14 - testbed-node-0
2025-09-03 01:10:14.761791 | orchestrator | 2025-09-03 01:10:14 - testbed-node-3
2025-09-03 01:10:14.857622 | orchestrator | 2025-09-03 01:10:14 - clean up keypairs
2025-09-03 01:10:14.872681 | orchestrator | 2025-09-03 01:10:14 - testbed
2025-09-03 01:10:14.901622 | orchestrator | 2025-09-03 01:10:14 - wait for servers to be gone
2025-09-03 01:10:23.621581 | orchestrator | 2025-09-03 01:10:23 - clean up ports
2025-09-03 01:10:23.807507 | orchestrator | 2025-09-03 01:10:23 - 4510c842-e136-4424-8168-4a02ef44a7a7
2025-09-03 01:10:24.155249 | orchestrator | 2025-09-03 01:10:24 - b1a56d0d-6359-4d6b-9147-e670893768a8
2025-09-03 01:10:24.420252 | orchestrator | 2025-09-03 01:10:24 - b2159b41-0941-4c03-9a78-b2a2a2937b89
2025-09-03 01:10:24.632441 | orchestrator | 2025-09-03 01:10:24 - c24bc52a-c65e-449e-910d-4d4d426acfcd
2025-09-03 01:10:25.345450 | orchestrator | 2025-09-03 01:10:25 - d34bf0cd-4c03-4a54-9965-def67cf287bf
2025-09-03 01:10:25.744770 | orchestrator | 2025-09-03 01:10:25 - f1699e53-0f31-4c69-9849-52f86ddce7e7
2025-09-03 01:10:25.951284 | orchestrator | 2025-09-03 01:10:25 - fe3bf9e8-8907-4b44-b5ec-63905344372a
2025-09-03 01:10:26.181007 | orchestrator | 2025-09-03 01:10:26 - clean up volumes
2025-09-03 01:10:26.286258 | orchestrator | 2025-09-03 01:10:26 - testbed-volume-3-node-base
2025-09-03 01:10:26.327062 | orchestrator | 2025-09-03 01:10:26 - testbed-volume-5-node-base
2025-09-03 01:10:26.363866 | orchestrator | 2025-09-03 01:10:26 - testbed-volume-1-node-base
2025-09-03 01:10:26.406984 | orchestrator | 2025-09-03 01:10:26 - testbed-volume-4-node-base
2025-09-03 01:10:26.456609 | orchestrator | 2025-09-03 01:10:26 - testbed-volume-manager-base
2025-09-03 01:10:26.496442 | orchestrator | 2025-09-03 01:10:26 - testbed-volume-2-node-base
2025-09-03 01:10:26.538351 | orchestrator | 2025-09-03 01:10:26 - testbed-volume-0-node-base
2025-09-03 01:10:26.579640 | orchestrator | 2025-09-03 01:10:26 - testbed-volume-1-node-4
2025-09-03 01:10:26.619772 | orchestrator | 2025-09-03 01:10:26 - testbed-volume-3-node-3
2025-09-03 01:10:26.660455 | orchestrator | 2025-09-03 01:10:26 - testbed-volume-5-node-5
2025-09-03 01:10:26.702447 | orchestrator | 2025-09-03 01:10:26 - testbed-volume-2-node-5
2025-09-03 01:10:26.751736 | orchestrator | 2025-09-03 01:10:26 - testbed-volume-6-node-3
2025-09-03 01:10:26.794074 | orchestrator | 2025-09-03 01:10:26 - testbed-volume-4-node-4
2025-09-03 01:10:26.834310 | orchestrator | 2025-09-03 01:10:26 - testbed-volume-0-node-3
2025-09-03 01:10:26.875184 | orchestrator | 2025-09-03 01:10:26 - testbed-volume-8-node-5
2025-09-03 01:10:26.918229 | orchestrator | 2025-09-03 01:10:26 - testbed-volume-7-node-4
2025-09-03 01:10:26.957384 | orchestrator | 2025-09-03 01:10:26 - disconnect routers
2025-09-03 01:10:27.549033 | orchestrator | 2025-09-03 01:10:27 - testbed
2025-09-03 01:10:28.495584 | orchestrator | 2025-09-03 01:10:28 - clean up subnets
2025-09-03 01:10:28.548737 | orchestrator | 2025-09-03 01:10:28 - subnet-testbed-management
2025-09-03 01:10:28.710521 | orchestrator | 2025-09-03 01:10:28 - clean up networks
2025-09-03 01:10:28.890392 | orchestrator | 2025-09-03 01:10:28 - net-testbed-management
2025-09-03 01:10:29.191670 | orchestrator | 2025-09-03 01:10:29 - clean up security groups
2025-09-03 01:10:29.229320 | orchestrator | 2025-09-03 01:10:29 - testbed-node
2025-09-03 01:10:29.340285 | orchestrator | 2025-09-03 01:10:29 - testbed-management
2025-09-03 01:10:29.467089 | orchestrator | 2025-09-03 01:10:29 - clean up floating ips
2025-09-03 01:10:29.511194 | orchestrator | 2025-09-03 01:10:29 - 81.163.192.254
2025-09-03 01:10:29.880361 | orchestrator | 2025-09-03 01:10:29 - clean up routers
2025-09-03 01:10:29.981825 | orchestrator | 2025-09-03 01:10:29 - testbed
2025-09-03 01:10:30.948887 | orchestrator | ok: Runtime: 0:00:17.748354
2025-09-03 01:10:30.954173 |
2025-09-03 01:10:30.954421 | PLAY RECAP
2025-09-03 01:10:30.954527 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-09-03 01:10:30.954579 |
2025-09-03 01:10:31.097002 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
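The "Clean the cloud environment" task above walks the testbed resources in reverse dependency order: servers and keypairs first, then ports and volumes once the servers are gone, then router interfaces, subnets, networks, security groups, floating IPs and finally the routers. A rough openstacksdk sketch of that ordering, assuming a clouds.yaml entry named "testbed" and resources carrying a "testbed" name prefix; the job's own cleanup tooling, which prints the timestamped messages above, is not this code:

    import openstack
    from openstack import exceptions

    # Assumed names; the real job derives the cloud and prefix from its variables.
    conn = openstack.connect(cloud="testbed")
    prefix = "testbed"

    # 1. Servers and keypairs first, then wait until the servers are really gone
    #    so that their ports and volumes become deletable.
    servers = [s for s in conn.compute.servers() if s.name.startswith(prefix)]
    for server in servers:
        conn.compute.delete_server(server)
    for keypair in conn.compute.keypairs():
        if keypair.name.startswith(prefix):
            conn.compute.delete_keypair(keypair)
    for server in servers:
        conn.compute.wait_for_delete(server)

    # 2. Ports and volumes.
    for port in conn.network.ports():
        if (port.name or "").startswith(prefix):
            conn.network.delete_port(port)
    for volume in conn.block_storage.volumes():
        if (volume.name or "").startswith(prefix):
            conn.block_storage.delete_volume(volume)

    # 3. Detach the router from its subnets before tearing down L2/L3 resources.
    routers = [r for r in conn.network.routers() if r.name.startswith(prefix)]
    subnets = [s for s in conn.network.subnets() if s.name.startswith(f"subnet-{prefix}")]
    for router in routers:
        for subnet in subnets:
            try:
                conn.network.remove_interface_from_router(router, subnet_id=subnet.id)
            except exceptions.SDKException:
                pass  # interface may already be detached

    # 4. Subnets, networks, security groups, floating IPs, and finally the routers.
    for subnet in subnets:
        conn.network.delete_subnet(subnet)
    for network in conn.network.networks():
        if network.name.startswith(f"net-{prefix}"):
            conn.network.delete_network(network)
    for group in conn.network.security_groups():
        if group.name.startswith(prefix):
            conn.network.delete_security_group(group)
    for fip in conn.network.ips():  # floating IPs are unnamed; this removes all of them
        conn.network.delete_ip(fip)
    for router in routers:
        conn.network.delete_router(router)

The ordering is the point: ports and volumes only become deletable after the servers are gone (hence the "wait for servers to be gone" step above), and a router can only be removed after its subnet interfaces have been detached.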
2025-09-03 01:10:31.098067 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-09-03 01:10:31.832839 |
2025-09-03 01:10:31.833019 | PLAY [Cleanup play]
2025-09-03 01:10:31.849653 |
2025-09-03 01:10:31.849798 | TASK [Set cloud fact (Zuul deployment)]
2025-09-03 01:10:31.907319 | orchestrator | ok
2025-09-03 01:10:31.916284 |
2025-09-03 01:10:31.916434 | TASK [Set cloud fact (local deployment)]
2025-09-03 01:10:31.951147 | orchestrator | skipping: Conditional result was False
2025-09-03 01:10:31.967063 |
2025-09-03 01:10:31.967218 | TASK [Clean the cloud environment]
2025-09-03 01:10:33.074108 | orchestrator | 2025-09-03 01:10:33 - clean up servers
2025-09-03 01:10:33.569155 | orchestrator | 2025-09-03 01:10:33 - clean up keypairs
2025-09-03 01:10:33.588629 | orchestrator | 2025-09-03 01:10:33 - wait for servers to be gone
2025-09-03 01:10:33.636295 | orchestrator | 2025-09-03 01:10:33 - clean up ports
2025-09-03 01:10:33.707733 | orchestrator | 2025-09-03 01:10:33 - clean up volumes
2025-09-03 01:10:33.766854 | orchestrator | 2025-09-03 01:10:33 - disconnect routers
2025-09-03 01:10:33.795412 | orchestrator | 2025-09-03 01:10:33 - clean up subnets
2025-09-03 01:10:33.813332 | orchestrator | 2025-09-03 01:10:33 - clean up networks
2025-09-03 01:10:33.940490 | orchestrator | 2025-09-03 01:10:33 - clean up security groups
2025-09-03 01:10:33.976205 | orchestrator | 2025-09-03 01:10:33 - clean up floating ips
2025-09-03 01:10:34.002412 | orchestrator | 2025-09-03 01:10:34 - clean up routers
2025-09-03 01:10:34.504708 | orchestrator | ok: Runtime: 0:00:01.313745
2025-09-03 01:10:34.508853 |
2025-09-03 01:10:34.509050 | PLAY RECAP
2025-09-03 01:10:34.509176 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2025-09-03 01:10:34.509237 |
2025-09-03 01:10:34.634430 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-09-03 01:10:34.636850 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-09-03 01:10:35.359808 |
2025-09-03 01:10:35.359962 | PLAY [Base post-fetch]
2025-09-03 01:10:35.375428 |
2025-09-03 01:10:35.375553 | TASK [fetch-output : Set log path for multiple nodes]
2025-09-03 01:10:35.420865 | orchestrator | skipping: Conditional result was False
2025-09-03 01:10:35.433410 |
2025-09-03 01:10:35.433610 | TASK [fetch-output : Set log path for single node]
2025-09-03 01:10:35.468800 | orchestrator | ok
2025-09-03 01:10:35.476389 |
2025-09-03 01:10:35.476517 | LOOP [fetch-output : Ensure local output dirs]
2025-09-03 01:10:35.953800 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/9b30fd534a0f43c8b8a0305e86d4e4b7/work/logs"
2025-09-03 01:10:36.207432 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/9b30fd534a0f43c8b8a0305e86d4e4b7/work/artifacts"
2025-09-03 01:10:36.479589 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/9b30fd534a0f43c8b8a0305e86d4e4b7/work/docs"
2025-09-03 01:10:36.494767 |
2025-09-03 01:10:36.494931 | LOOP [fetch-output : Collect logs, artifacts and docs]
2025-09-03 01:10:37.423751 | orchestrator | changed: .d..t...... ./
2025-09-03 01:10:37.424098 | orchestrator | changed: All items complete
2025-09-03 01:10:37.424160 |
2025-09-03 01:10:38.121129 | orchestrator | changed: .d..t...... ./
2025-09-03 01:10:38.842001 | orchestrator | changed: .d..t...... ./
2025-09-03 01:10:38.868023 |
2025-09-03 01:10:38.868238 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2025-09-03 01:10:38.907826 | orchestrator | skipping: Conditional result was False
2025-09-03 01:10:38.910706 | orchestrator | skipping: Conditional result was False
2025-09-03 01:10:38.928646 |
2025-09-03 01:10:38.928758 | PLAY RECAP
2025-09-03 01:10:38.928835 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2025-09-03 01:10:38.928876 |
2025-09-03 01:10:39.056362 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-09-03 01:10:39.059675 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-09-03 01:10:39.810883 |
2025-09-03 01:10:39.811052 | PLAY [Base post]
2025-09-03 01:10:39.825356 |
2025-09-03 01:10:39.825487 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2025-09-03 01:10:40.767355 | orchestrator | changed
2025-09-03 01:10:40.780381 |
2025-09-03 01:10:40.780520 | PLAY RECAP
2025-09-03 01:10:40.780606 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2025-09-03 01:10:40.780694 |
2025-09-03 01:10:40.898136 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-09-03 01:10:40.900534 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-09-03 01:10:41.693441 |
2025-09-03 01:10:41.693606 | PLAY [Base post-logs]
2025-09-03 01:10:41.704245 |
2025-09-03 01:10:41.704372 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-09-03 01:10:42.168381 | localhost | changed
2025-09-03 01:10:42.187497 |
2025-09-03 01:10:42.187686 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2025-09-03 01:10:42.226133 | localhost | ok
2025-09-03 01:10:42.231903 |
2025-09-03 01:10:42.232102 | TASK [Set zuul-log-path fact]
2025-09-03 01:10:42.248522 | localhost | ok
2025-09-03 01:10:42.259632 |
2025-09-03 01:10:42.259752 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-09-03 01:10:42.285153 | localhost | ok
2025-09-03 01:10:42.289732 |
2025-09-03 01:10:42.289871 | TASK [upload-logs : Create log directories]
2025-09-03 01:10:42.783053 | localhost | changed
2025-09-03 01:10:42.785824 |
2025-09-03 01:10:42.785922 | TASK [upload-logs : Ensure logs are readable before uploading]
2025-09-03 01:10:43.277007 | localhost -> localhost | ok: Runtime: 0:00:00.004179
2025-09-03 01:10:43.286175 |
2025-09-03 01:10:43.286360 | TASK [upload-logs : Upload logs to log server]
2025-09-03 01:10:43.863840 | localhost | Output suppressed because no_log was given
2025-09-03 01:10:43.867918 |
2025-09-03 01:10:43.868116 | LOOP [upload-logs : Compress console log and json output]
2025-09-03 01:10:43.921749 | localhost | skipping: Conditional result was False
2025-09-03 01:10:43.926762 | localhost | skipping: Conditional result was False
2025-09-03 01:10:43.939528 |
2025-09-03 01:10:43.939829 | LOOP [upload-logs : Upload compressed console log and json output]
2025-09-03 01:10:43.994330 | localhost | skipping: Conditional result was False
2025-09-03 01:10:43.994618 |
2025-09-03 01:10:43.999592 | localhost | skipping: Conditional result was False
2025-09-03 01:10:44.008101 |
2025-09-03 01:10:44.008228 | LOOP [upload-logs : Upload console log and json output]